CN114694060B - Road casting detection method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114694060B
CN114694060B (application CN202210230541.8A)
Authority
CN
China
Prior art keywords: road, casting, determining, condition, area
Prior art date
Legal status
Active
Application number
CN202210230541.8A
Other languages
Chinese (zh)
Other versions
CN114694060A (en)
Inventor
程云飞
张希
吴风炎
衣佳政
Current Assignee
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd
Priority to CN202210230541.8A
Publication of CN114694060A
Application granted
Publication of CN114694060B
Legal status: Active


Abstract

The application provides a road casting detection method, electronic equipment and a storage medium. The method performs target recognition on the road area image acquired by video image acquisition equipment in a road monitoring area, that is, it detects the acquired road area image and judges whether a road casting is present in it, thereby determining a first road casting condition within the field of view of the video image acquisition equipment. It also analyzes the motion attribute data of each vehicle in the road monitoring area, thereby determining a second road casting condition outside the field of view of the video image acquisition equipment. By combining target recognition on the image acquired by the video image acquisition equipment with analysis of the motion attribute data of each vehicle in the road monitoring area, the scheme breaks through the detection limits of the video image acquisition equipment and realizes road casting detection over the whole road section where the road monitoring area is located.

Description

Road casting detection method, electronic equipment and storage medium
Technical Field
The application relates to the technical field of vehicle-road coordination, and in particular to a road casting detection method, electronic equipment and a storage medium.
Background
Road castings are one of the major causes of road traffic accidents. For example, a road casting tends to force passing vehicles to slow down abruptly, which can cause traffic accidents. Road casting behavior therefore needs to be detected in time, as a traffic event affecting driving safety.
At present, road casting detection generally relies either on manual inspection or on a video detection algorithm. Manual inspection consumes a great deal of manpower and material resources and cannot monitor road casting behavior in real time, while detection based on a video detection algorithm suffers from defects such as low detection accuracy and short detection distance.
In summary, a road casting detection method is needed that can detect road castings over whole road sections.
Disclosure of Invention
The application provides a road casting detection method, electronic equipment and a storage medium, which are used to realize road casting detection over a whole road section.
In a first aspect, in an exemplary embodiment of the present application, there is provided a method for detecting a road casting, including:
performing target recognition on a road area image acquired by video image acquisition equipment in a road monitoring area, and determining a first road casting condition within the field of view of the video image acquisition equipment;
determining a second road casting condition outside the field of view of the video image acquisition equipment based on motion attribute data of each vehicle in the road monitoring area; the first road casting condition and the second road casting condition are used to indicate the road casting condition in the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle itself or collected by radar equipment arranged in the road monitoring area.
The above technical scheme fully fuses the video image acquisition equipment with the motion attribute data of each vehicle, which is either reported by the vehicle or collected by radar equipment in the road monitoring area. The motion attribute data can be collected and reported in real time by sensing equipment (various sensors and the like) installed on the vehicle, unaffected by environmental factors such as weather and illumination; alternatively, radar equipment arranged in the road monitoring area locates targets (such as vehicles) by transmitting electromagnetic waves, which is likewise unaffected by weather and illumination and can track and monitor targets at long distances. The scheme can thus break through the detection range limit of the video image acquisition equipment (whose detection range is comparatively small) and realize road casting detection over whole road sections. Specifically, for any road monitoring area, target recognition is performed on the road area image acquired by the video image acquisition equipment in that area, that is, the acquired image is detected and recognized to judge whether a road casting is present in it, yielding the first road casting condition within the field of view of the video image acquisition equipment. Since the detection range of the video image acquisition equipment is limited, whether a road casting exists in the area outside its detection range (i.e., outside its field of view) is determined by analyzing the acquired motion attribute data of each vehicle in the road monitoring area, yielding the second road casting condition.
The scheme therefore combines target recognition on the road area image acquired by the video image acquisition equipment with analysis of the motion attribute data of each vehicle in the road monitoring area, so that the detection limit of the video image acquisition equipment can be broken through, road casting detection over the whole road section where the road monitoring area is located can be realized, and effective support can be provided for ensuring driving safety.
In some exemplary embodiments, performing target recognition on the road area image acquired by the video image acquisition equipment in the road monitoring area and determining the first road casting condition within the field of view of the video image acquisition equipment includes:
dividing a road casting detection region from the road area image;
performing foreground target detection on the road casting detection region, and determining at least one first candidate object from it;
performing target feature extraction on the road casting detection region, and determining at least one second candidate object from it, each second candidate object being marked with a casting attribute or a non-casting attribute;
determining a first road casting condition within the field of view of the video image acquisition equipment based on the at least one first candidate object and the at least one second candidate object.
In the above technical scheme, the candidate objects found by foreground target detection in the road casting detection region are overlapped and fused with the candidate objects determined by target feature extraction in the same region. Suspected road castings are screened out preliminarily and non-casting targets (such as persons, motor vehicles and non-motor vehicles) are excluded; the suspected castings are then further confirmed, so that whether a suspected object actually is a road casting can be determined accurately, and false alarms caused by light and shadow can be effectively eliminated.
In some exemplary embodiments, performing foreground target detection on the road casting detection region and determining at least one first candidate object from it includes:
determining at least one foreground target in the road casting detection region through a Gaussian mixture model, each foreground target being a first candidate object;
and performing target feature extraction on the road casting detection region and determining at least one second candidate object from it includes:
determining the at least one second candidate object in the road casting detection region through a target detection model, the target detection model being used to identify the attributes and coordinate positions of casting targets and non-casting targets.
In the above technical scheme, detecting the road casting detection region with the Gaussian mixture model comprehensively identifies the foreground targets present in it, while detecting the same region with the target detection model accurately yields the attributes and coordinate positions of the casting and non-casting targets present. The number of targets the Gaussian mixture model detects in the region is larger than the number the target detection model detects, while the detection accuracy of the target detection model is higher than that of the Gaussian mixture model. Combining the two therefore effectively improves the recall rate of road casting detection.
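As a rough sketch of the foreground-detection step, the background model below keeps a single running Gaussian per pixel and flags pixels that deviate by more than a few standard deviations. This is a simplified stand-in for the mixture-of-Gaussians model named above; the learning rate, threshold and initial variance are illustrative assumptions, not values from the patent.

```python
import numpy as np

class GaussianBackgroundModel:
    """Per-pixel running Gaussian background model: a simplified,
    single-component stand-in for a mixture-of-Gaussians model.
    All parameter values here are illustrative assumptions."""

    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.full(shape, 15.0 ** 2, dtype=np.float64)
        self.alpha = alpha        # learning rate for the background statistics
        self.k = k                # foreground threshold, in standard deviations
        self.initialised = False

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if not self.initialised:
            self.mean[...] = frame
            self.initialised = True
            return np.zeros(frame.shape, dtype=bool)
        diff = np.abs(frame - self.mean)
        foreground = diff > self.k * np.sqrt(self.var)
        # update the background statistics only where the pixel looks like background
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground

# A static road surface with one bright "dropped object" appearing in frame 2:
model = GaussianBackgroundModel((4, 4))
background = np.full((4, 4), 100.0)
model.apply(background)              # first frame initialises the model
obj_frame = background.copy()
obj_frame[1:3, 1:3] = 220.0          # sudden bright region = foreground candidate
mask = model.apply(obj_frame)        # True exactly where the object appeared
```

A production system would use a full mixture model with several Gaussian components per pixel precisely because road backgrounds are multi-modal (swaying shadows, periodic glare), which a single Gaussian cannot represent.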
In some exemplary embodiments, determining the first road casting condition within the field of view of the video image acquisition equipment based on the at least one first candidate object and the at least one second candidate object includes:
performing de-duplication on the at least one first candidate object and the at least one second candidate object, a first candidate object and a second candidate object whose intersection-over-union is greater than or equal to a first set threshold being determined to be the same candidate object;
after de-duplication, determining through a target classification model whether each candidate object marked with the casting attribute, or left unmarked, actually is a casting, thereby obtaining the first road casting condition within the field of view of the video image acquisition equipment.
In the above technical scheme, de-duplicating the at least one first candidate object and the at least one second candidate object, that is, performing overlap fusion on them, and combining the result with the non-casting targets detected by the target detection model excludes the non-castings in the road area image and preliminarily screens out the candidates marked with the casting attribute or left unmarked. Those candidates are then further confirmed by the target classification model to determine whether they actually are road castings, which effectively eliminates false alarms caused by light and shadow and improves the detection accuracy of road castings.
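The de-duplication step can be sketched as follows. The boxes, labels and the 0.5 IoU threshold are illustrative assumptions, and the final classification-model check is represented only by passing the surviving candidates on.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def merge_candidates(first, second, iou_thresh=0.5):
    """Fuse foreground candidates (unlabelled boxes) with detector candidates
    (boxes labelled 'casting' / 'non-casting'): pairs whose IoU reaches the
    threshold count as the same object. Returns the suspected castings to be
    passed to the classification model: detector boxes labelled 'casting'
    plus unmatched, unlabelled foreground boxes. Labels and the threshold
    value are illustrative assumptions."""
    suspected = [box for box, label in second if label == "casting"]
    for box in first:
        matched = any(iou(box, det_box) >= iou_thresh for det_box, _ in second)
        if not matched:
            suspected.append(box)   # unlabelled candidate, needs classification
    return suspected

first = [(0, 0, 10, 10), (50, 50, 60, 60)]
second = [((1, 1, 10, 10), "casting"), ((100, 100, 120, 120), "non-casting")]
result = merge_candidates(first, second)
```

Note that the first foreground box overlaps the detector's "casting" box (IoU 0.81), so the pair is treated as one object, while the second foreground box survives unmatched and unlabelled.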
In some exemplary embodiments, determining the second road casting condition outside the field of view of the video image acquisition equipment based on the motion attribute data of each vehicle in the road monitoring area includes:
acquiring motion attribute data of each vehicle whose acquisition time falls within a preset period, collected outside the field of view of the video image acquisition equipment;
for any road surface position, determining, based on the motion attribute data of the vehicles corresponding to that position in the preset period, a first number of vehicles passing the position and a second number of vehicles passing the position with abnormal behavior in the preset period, the abnormal behavior including any of deceleration, braking or lane changing;
determining a second road casting condition for the road surface position based on the first number and the second number.
In the above technical scheme, to detect whether a road casting exists outside the field of view of the video image acquisition equipment, after the motion attribute data whose acquisition time falls within the preset period is acquired for each vehicle, whether a road casting exists at a road surface position outside the field of view can be detected from abnormal behaviors, such as deceleration, braking or lane changing, of the vehicles corresponding to that position in the preset period.
In some exemplary embodiments, determining the second number of vehicles with abnormal behavior includes:
for a vehicle with abnormal behavior, determining, based on the motion attribute data of the vehicle at each acquisition time within the preset period, whether it was overtaking or urgently avoiding a preceding vehicle at the road surface position; if so, decrementing the second number by 1.
In the above technical scheme, to exclude braking, deceleration and lane changing caused by behaviors such as avoiding a preceding vehicle or overtaking, the motion attribute data of each vehicle with abnormal behavior is screened, which ensures the accuracy of the counted second number and effectively reduces the false alarm rate of road casting detection.
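A minimal sketch of the counting step, under the assumption that per-vehicle abnormal-behavior and overtaking/avoidance flags have already been derived from the motion attribute data; the record format is invented for illustration.

```python
def count_passing_and_abnormal(records):
    """records: one dict per vehicle pass at a road-surface position, e.g.
    {"vehicle": "A", "abnormal": True, "overtaking_or_avoidance": False}.
    Returns (first_number, second_number): vehicles passing the position,
    and vehicles with abnormal behavior (deceleration, braking or lane
    change) after excluding overtaking / emergency avoidance of a
    preceding vehicle. Field names are illustrative assumptions."""
    first_number = len(records)
    second_number = 0
    for rec in records:
        if rec["abnormal"] and not rec["overtaking_or_avoidance"]:
            second_number += 1
    return first_number, second_number

records = [
    {"vehicle": "A", "abnormal": True,  "overtaking_or_avoidance": False},
    {"vehicle": "B", "abnormal": True,  "overtaking_or_avoidance": True},  # excluded
    {"vehicle": "C", "abnormal": False, "overtaking_or_avoidance": False},
]
first_number, second_number = count_passing_and_abnormal(records)
```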
In some exemplary embodiments, determining the second road casting condition for the road surface position based on the first number and the second number includes:
if the road surface position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, determining that the second road casting condition is that a road casting exists at the road surface position; or, when the ratio is smaller than the second set threshold, determining that the second road casting condition is that no road casting exists at the road surface position;
if the road surface position is at a fork: when the ratio of the second number to the first number is between the second set threshold and a third set threshold, determining that the second road casting condition is that no road casting exists at the road surface position; when the ratio is greater than or equal to the third set threshold, determining that the second road casting condition is that a road casting exists at the road surface position; the third set threshold is greater than the second set threshold.
In the above technical scheme, to exclude braking, deceleration or lane changing caused by a road fork (such as a ramp exit), when determining the road casting condition of a road surface position it is first judged whether the position is at a fork. If the position is not in a fork region, a road casting is determined to exist there when the ratio of the second number to the first number is greater than or equal to the second set threshold, and not to exist when the ratio is smaller than that threshold. If the position is in a fork region, a road casting is determined to exist there when the ratio is greater than or equal to the third set threshold, and not to exist when the ratio lies between the second and third set thresholds. The scheme thus effectively reduces the false alarm rate of road casting detection.
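The fork-aware decision rule can be sketched as follows; the threshold values 0.3 and 0.6 are illustrative assumptions, since the patent only states that the third set threshold is greater than the second.

```python
def second_road_casting_condition(first_number, second_number, at_fork,
                                  t2=0.3, t3=0.6):
    """Decide whether a road casting is present at a road-surface position
    from the share of passing vehicles that behaved abnormally. A higher
    threshold applies at a fork, where braking and lane changes are routine.
    The threshold values t2 and t3 are illustrative assumptions."""
    if first_number == 0:
        return False
    ratio = second_number / first_number
    threshold = t3 if at_fork else t2
    return ratio >= threshold

# 8 of 20 passing vehicles abnormal (ratio 0.4): a casting is flagged on a
# plain segment, but not at a fork, where that much braking is expected.
casting_plain = second_road_casting_condition(20, 8, at_fork=False)
casting_fork = second_road_casting_condition(20, 8, at_fork=True)
```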
In some exemplary embodiments, the method further includes:
broadcasting the first road casting condition and the second road casting condition through at least one road side device arranged on the road where the road monitoring area is located, so that each vehicle traveling on that road can avoid the road casting.
In the above technical scheme, after the road casting condition in a road monitoring area (such as a traffic road section) is determined, including the casting detection time, the lane information of the road casting, and the road casting position information, it can be broadcast through at least one road side device in the area. Every vehicle within the monitoring range of those devices then receives the road casting condition in time and decelerates or changes lanes in advance to avoid the casting, effectively ensuring driving safety.
In a second aspect, an exemplary embodiment of the present application provides electronic equipment including a processor and a memory connected to the processor, the memory storing a computer program which, when executed by the processor, causes the electronic equipment to perform: performing target recognition on a road area image acquired by video image acquisition equipment in a road monitoring area, and determining a first road casting condition within the field of view of the video image acquisition equipment; and determining a second road casting condition outside the field of view of the video image acquisition equipment based on motion attribute data of each vehicle in the road monitoring area; the first road casting condition and the second road casting condition are used to indicate the road casting condition in the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle itself or collected by radar equipment arranged in the road monitoring area.
In some exemplary embodiments, the electronic device is specifically configured to perform:
dividing a road casting detection region from the road area image;
performing foreground target detection on the road casting detection region, and determining at least one first candidate object from it;
performing target feature extraction on the road casting detection region, and determining at least one second candidate object from it, each second candidate object being marked with a casting attribute or a non-casting attribute;
determining a first road casting condition within the field of view of the video image acquisition equipment based on the at least one first candidate object and the at least one second candidate object.
In some exemplary embodiments, the electronic device is specifically configured to perform:
determining at least one foreground target in the road casting detection region through a Gaussian mixture model, each foreground target being a first candidate object;
the electronic equipment is specifically configured to perform:
determining the at least one second candidate object in the road casting detection region through a target detection model, the target detection model being used to identify the attributes and coordinate positions of casting targets and non-casting targets.
In some exemplary embodiments, the electronic device is specifically configured to perform:
performing de-duplication on the at least one first candidate object and the at least one second candidate object, a first candidate object and a second candidate object whose intersection-over-union is greater than or equal to a first set threshold being determined to be the same candidate object;
after de-duplication, determining through a target classification model whether each candidate object marked with the casting attribute, or left unmarked, actually is a casting, thereby obtaining the first road casting condition within the field of view of the video image acquisition equipment.
In some exemplary embodiments, the electronic device is specifically configured to perform:
acquiring motion attribute data of each vehicle whose acquisition time falls within a preset period, collected outside the field of view of the video image acquisition equipment;
for any road surface position, determining, based on the motion attribute data of the vehicles corresponding to that position in the preset period, a first number of vehicles passing the position and a second number of vehicles passing the position with abnormal behavior in the preset period, the abnormal behavior including any of deceleration, braking or lane changing;
determining a second road casting condition for the road surface position based on the first number and the second number.
In some exemplary embodiments, the electronic device is specifically configured to perform:
for a vehicle with abnormal behavior, determining, based on the motion attribute data of the vehicle at each acquisition time within the preset period, whether it was overtaking or urgently avoiding a preceding vehicle at the road surface position; if so, decrementing the second number by 1.
In some exemplary embodiments, the electronic device is specifically configured to perform:
if the road surface position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, determining that the second road casting condition is that a road casting exists at the road surface position; or, when the ratio is smaller than the second set threshold, determining that the second road casting condition is that no road casting exists at the road surface position;
if the road surface position is at a fork: when the ratio of the second number to the first number is between the second set threshold and a third set threshold, determining that the second road casting condition is that no road casting exists at the road surface position; when the ratio is greater than or equal to the third set threshold, determining that the second road casting condition is that a road casting exists at the road surface position; the third set threshold is greater than the second set threshold.
In some exemplary embodiments, the electronic device is further configured to perform:
broadcasting the first road casting condition and the second road casting condition through at least one road side device arranged on the road where the road monitoring area is located, so that each vehicle traveling on that road can avoid the road casting.
In a third aspect, an embodiment of the present application provides a computer readable storage medium storing a computer program executable by electronic equipment; when the program runs on the electronic equipment, it causes the equipment to perform the road casting detection method of any embodiment of the first aspect described above.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a road casting detection method according to some embodiments of the present application;
Fig. 2 is a schematic diagram of detecting a road casting within the field of view of video image acquisition equipment according to some embodiments of the present application;
Fig. 3 is a schematic diagram of detecting a road casting outside the field of view of video image acquisition equipment according to some embodiments of the present application;
Fig. 4 is a schematic structural diagram of electronic equipment according to some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Fig. 1 schematically illustrates the flow of a road casting detection method according to an embodiment of the present application; the flow may be executed by electronic equipment. The electronic equipment may be a server, a component (such as a chip or an integrated circuit) capable of supporting the functions the server needs to implement the method, or other equipment with the required functions, such as a traffic control platform.
As shown in fig. 1, the process specifically includes:
Step 101: perform target recognition on the road area image acquired by the video image acquisition equipment in the road monitoring area, and determine the first road casting condition within the field of view of the video image acquisition equipment.
In the embodiment of the application, video image acquisition equipment (such as a video surveillance camera) and radar equipment (such as a millimeter wave radar) are usually arranged on a highway or urban road; a given road monitoring area may contain a video surveillance camera, a millimeter wave radar, or both. Alternatively, the motion attribute data of a vehicle can be collected by sensors arranged in the vehicle itself and reported to road side devices through the vehicle's on-board equipment. The detection range of the video image acquisition equipment is limited: its optimal detection distance is, for example, 50 to 70 meters, it can only capture video images within its field of view, and in poor external conditions (such as heavy fog or heavy rain) it may only capture images over an even smaller distance. The radar equipment, by contrast, collects motion attribute data by transmitting electromagnetic wave signals, so it is not affected by external environmental quality and its detection distance is comparatively large, for example 100 meters or more; it can thus collect motion attribute data of vehicles over a longer range than the video image acquisition equipment. For example, the radar equipment may be a millimeter wave radar, that is, a radar whose working frequency lies in the millimeter wave band.
The millimeter wave radar actively transmits electromagnetic wave signals and receives their echoes, and obtains the relative distance, relative speed and relative direction of a vehicle target from the time difference between the transmitted and received signals. Taking a traffic control platform as the execution body of the technical scheme of the embodiment, the platform acquires in real time the video images collected by the video surveillance cameras in a road monitoring area and the motion attribute data of each vehicle collected by the millimeter wave radar. It performs target recognition on the video images to determine the first road casting condition within the field of view of the video image acquisition equipment, and analyzes the motion attribute data collected by the millimeter wave radar to determine the second road casting condition outside that field of view, so that the road casting condition in the road monitoring area can be determined more comprehensively.
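The pulse-timing relations this paragraph relies on can be sketched as follows; the 77 GHz carrier is a typical automotive millimeter wave band chosen here as an assumption, not a value stated in the patent.

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_time_s):
    """Target distance from the echo round-trip time: the signal travels
    to the target and back, so range = c * t / 2."""
    return C * round_trip_time_s / 2.0

def radial_speed(doppler_shift_hz, carrier_hz=77e9):
    """Relative (radial) speed from the Doppler shift of the echo:
    v = f_d * wavelength / 2, with wavelength = c / f_carrier."""
    wavelength = C / carrier_hz
    return doppler_shift_hz * wavelength / 2.0

r = radar_range(1e-6)      # a 1 microsecond round trip -> roughly 150 m
v = radial_speed(10_000)   # a 10 kHz Doppler shift at 77 GHz -> roughly 19.5 m/s
```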
Specifically, for a road monitoring area, after the road area image acquired by the video image acquisition equipment in that area is obtained, the image is detected and a road casting detection region is divided from it. Foreground target detection is then performed on the region to determine at least one first candidate object, and target feature extraction is performed on the region to determine at least one second candidate object, each second candidate object being marked with a casting attribute or a non-casting attribute. The first road casting condition within the field of view of the video image acquisition equipment can then be determined based on the at least one first candidate object and the at least one second candidate object.
When dividing the road casting to-be-detected area, lane line detection is performed on the road area image to identify the positions of all lane lines in it, and the road casting to-be-detected area is divided from the image according to those positions. For example, lane line targets in the road area image are detected by a deep learning algorithm (such as a lane line detection algorithm), so that the road casting to-be-detected area is partitioned from the image by the coordinate position of each lane line. Lane line detection algorithms generally fall into two types: one performs semantic segmentation or instance segmentation based on visual features, such as LaneNet and SCNN (Spatial Convolutional Neural Networks); the other predicts the points where the lane line is located from visual features, such as Ultra-Fast-Lane-Detection. The embodiment of the application completes lane line detection in the road area image by adopting the Ultra-Fast-Lane-Detection algorithm. The Ultra-Fast-Lane-Detection model structure is divided into three parts: a Backbone part, an Auxiliary part and a Group Classification part. The Backbone part adopts a smaller ResNet network to extract image features; the Auxiliary part concatenates and up-samples three layers of shallow features, enhancing the extraction of visual features; and the Group Classification part performs row-wise classification on global features to complete the selection of lane line candidate points. Therefore, through the lane line coordinate positions detected in the road area image by the Ultra-Fast-Lane-Detection algorithm, the division of the road casting to-be-detected area within the lane can be completed.
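The division step above can be sketched as follows. This is a minimal illustrative sketch, assuming the lane line detector outputs a per-row x-coordinate for each lane line; the function name, data layout and toy values are assumptions, not taken from the patent.

```python
# Hedged sketch: build a mask for the road-casting to-be-detected area from
# per-row x-coordinates of the outermost detected lane lines.

def lane_region_mask(height, width, left_line, right_line):
    """left_line / right_line map image row -> lane line x-coordinate.
    Returns a row-major boolean mask, True between the two lane lines."""
    mask = [[False] * width for _ in range(height)]
    for y in range(height):
        if y in left_line and y in right_line:
            x0, x1 = left_line[y], right_line[y]
            for x in range(max(0, x0), min(width, x1 + 1)):
                mask[y][x] = True
    return mask

# Toy lane lines on a 4x8 image (row 0 has no detected lane points).
left = {1: 2, 2: 1, 3: 0}
right = {1: 5, 2: 6, 3: 7}
mask = lane_region_mask(4, 8, left, right)
```

In practice the masked-out pixels would simply be ignored by the downstream foreground and target detection steps.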
Alternatively, in a vehicle-road cooperative scene, the embodiment of the application can acquire the coordinate position and width of the lane from high-precision map information, thereby likewise completing the division of the road casting to-be-detected area within the lane.
Furthermore, in the embodiment of the application, foreground targets in the road casting to-be-detected area are detected based on a mixed Gaussian model (such as a mixed Gaussian background modeling algorithm), targets in the area are detected based on a target detection algorithm, and the targets detected by the two algorithms are overlapped and fused, so that preliminary screening of suspected road casting areas can be completed and non-casting targets such as people, vehicles and non-motor vehicles in the road area image can be eliminated. That is, at least one foreground target is determined in the road casting to-be-detected area through the mixed Gaussian model, each foreground target being a first candidate object; and at least one second candidate object is determined in the area through a target detection model, the target detection model being used for identifying the attributes and coordinate positions of casting targets and non-casting targets. Detecting the area with the mixed Gaussian model can comprehensively identify the foreground targets present in it, while detecting the area with the target detection model can accurately distinguish casting targets from non-casting targets. The number of targets detected from the area by the mixed Gaussian model is generally larger than the number detected by the target detection model, while the detection accuracy of the target detection model is higher than that of the mixed Gaussian model.
Therefore, combining the mixed Gaussian model and the target detection model can effectively improve the recall rate of road casting detection. Foreground target detection on the road casting to-be-detected area means finding the set of pixels in the area that do not belong to the background; in the embodiment of the application, the "background" refers to the road surface of the expressway. Therefore, before foreground target detection is performed, the pixel values of the background need to be determined, that is, a model is built for the background. The embodiment of the application detects foreground targets in the road casting to-be-detected area based on the mixed Gaussian background modeling algorithm. Mixed Gaussian modeling essentially describes the range of background pixels in some form based on the change of video pixel values over a period of time. First, K Gaussian distributions are allocated to each pixel point in the video image as its background model, each Gaussian model comprising a pixel mean, a variance and a weight. The background model of each pixel point is:

P(x_{j,t}) = Σ_{i=1}^{K} ω_{i,t} · δ(x_{j,t}, μ^{j}_{i,t}, Σ^{j}_{i,t})

where x_{j,t} represents the pixel value of the j-th pixel point in the video image (i.e., the road casting to-be-detected area taken as a video image) at time t; if the image used for background modeling in the embodiment of the application is a color image containing multiple channels, x_{j,t} is a vector. P(x_{j,t}) represents the background distribution of the pixel point, i.e., the background model of the j-th pixel point at time t. ω_{i,t} represents the weight of the i-th Gaussian distribution in the mixed Gaussian background model at time t, i.e., its proportion in the mixture. μ^{j}_{i,t} represents the mean of the i-th Gaussian distribution of the j-th pixel at time t, Σ^{j}_{i,t} represents its covariance, and δ represents the probability density function of the Gaussian distribution.
The value of each pixel point in the video image at time t+1 is compared with the means of the mixed Gaussian model: if it falls within the variance range of some Gaussian distribution, the pixel is considered background; otherwise it is considered foreground. In this way, pixels in the video image can be classified into foreground and background. Namely:
|X_{i,t+1} − μ_{i,t}| ≤ D × σ_{i,t}
where X_{i,t+1} represents the pixel value at time t+1, μ_{i,t} represents the mean of the i-th Gaussian distribution at time t, σ_{i,t} represents the standard deviation of the i-th Gaussian distribution at time t, and D is a constant, which may be taken as 3 in the embodiment of the application. If the pixel point X_{i,t+1} satisfies the above formula, it is considered a background point; otherwise it is considered a foreground point.
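The decision rule above can be sketched directly. This is a minimal sketch; the function names and the component list format are illustrative assumptions.

```python
# Hedged sketch of the per-pixel background/foreground rule |X - mu| <= D * sigma.

def matches_component(pixel, mean, sigma, D=3):
    """A pixel matches one Gaussian component when |X - mu| <= D * sigma."""
    return abs(pixel - mean) <= D * sigma

def classify_pixel(pixel, components, D=3):
    """components: list of (mean, sigma) pairs for the K Gaussian distributions.
    The pixel is background if it matches any component, foreground otherwise."""
    if any(matches_component(pixel, m, s, D) for m, s in components):
        return "background"
    return "foreground"
```

With the toy model used in the next paragraph (means 10, 19, 30, 40, sigma 2), a pixel value of 20 matches the second component and is classified as background, while 80 matches none and is classified as foreground.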
In addition, by setting a learning rate, the mixed Gaussian model continuously updates the background model according to the matching results, realizing dynamic modeling of the background. For example, assume K in the background model formula is 4, so at each moment (i.e., each frame) every pixel point represents its background model with 4 Gaussian distributions. The means and variances of the 4 Gaussian distributions at the initial moment are set randomly, say a mean of 10 for the first Gaussian distribution, 19 for the second, 30 for the third and 40 for the fourth; assume the variance of each of the four distributions is 2 and the weight ω of each is 0.25. If the pixel value at some moment is 20, it falls within the variance range of the second Gaussian distribution, i.e., |X_{i,t+1} − μ_{i,t}| ≤ D × σ_{i,t}, so the pixel is a background pixel; the variance, mean and weight of the second Gaussian distribution are updated with the pixel value 20, while the first, third and fourth distributions are not updated. The effect is that the weight of the Gaussian distribution nearest to recent pixel values keeps increasing, and P(x_{j,t}) gradually approaches the true background value of the pixel point. If the pixel value at some moment is 80, it falls within the variance range of none of the Gaussian distributions, i.e., |X_{i,t+1} − μ_{i,t}| > D × σ_{i,t} for all of them, so the pixel is a foreground pixel; the Gaussian distribution with the minimum weight among the four is deleted, and a new Gaussian distribution with mean 80 is established to replace it. In this way a background model of the pixel can be dynamically established in real time.
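The worked example above can be sketched as one update step. This is a simplified sketch under stated assumptions: the learning rate alpha, the dictionary layout and the exact update equations are illustrative (the patent does not fix them), and only the first matching component is updated.

```python
# Hedged sketch of one per-pixel update of a mixed-Gaussian background model.

def make_model():
    """Toy model from the example: means 10/19/30/40, variance 2, weight 0.25."""
    return [{"mean": m, "sigma": 2.0, "weight": 0.25} for m in (10.0, 19.0, 30.0, 40.0)]

def gmm_update(gaussians, pixel, alpha=0.05, D=3, init_sigma=2.0):
    """Match the pixel against the components; update the matched one and the
    weights, or replace the minimum-weight component when nothing matches."""
    matched = None
    for g in gaussians:
        if matched is None and abs(pixel - g["mean"]) <= D * g["sigma"]:
            matched = g
    for g in gaussians:  # matched weight grows, the others decay
        g["weight"] = (1 - alpha) * g["weight"] + (alpha if g is matched else 0.0)
    if matched is not None:
        matched["mean"] = (1 - alpha) * matched["mean"] + alpha * pixel
        matched["sigma"] = ((1 - alpha) * matched["sigma"] ** 2
                            + alpha * (pixel - matched["mean"]) ** 2) ** 0.5
        return "background"
    worst = min(gaussians, key=lambda g: g["weight"])
    worst["mean"], worst["sigma"] = float(pixel), init_sigma
    return "foreground"
```

A pixel value of 20 matches the second component (its weight then exceeds the others), while 80 matches nothing and replaces the minimum-weight component.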
Therefore, foreground pixel points in the video image can be obtained through the mixed Gaussian model, and by denoising these pixel points, the foreground target area in the video image, i.e., the suspected road casting area, can be obtained.
Meanwhile, targets in the road casting to-be-detected area are detected based on a target detection algorithm. Unlike the mixed Gaussian background modeling algorithm, the target detection algorithm detects and locates targets by extracting image features. That is, at least one second candidate object is determined in the road casting to-be-detected area through a target detection model, the target detection model being used for identifying the attributes and coordinate positions of casting targets and non-casting targets. Illustratively, the embodiment of the application is described by taking the target detection algorithm YOLOV (You Only Look Once) as an example, i.e., the detection and positioning of castings within the field of view of the video image acquisition device are completed through the YOLOV algorithm. The YOLOV algorithm consists of three major parts: a Backbone network, a Neck (neck layer) and a Head. The Backbone of the YOLOV algorithm is CSPDarknet, a network that can extract rich information features from the input image. The Neck is a series of network layers that mix and combine image features and pass them to the prediction layer; the Neck layer of the YOLOV algorithm is PANet, which completes bottom-up and top-down feature extraction over the feature pyramid, a method of aggregating parameters from different training stages of the backbone network that improves the extraction of object features. The Head is a detection head that makes predictions from image features and generates casting detection bounding boxes.
For example, taking a highway as an example: road castings on a highway are of many types. Common castings such as cartons, packages, tires, iron blocks, stones, water bottles and traffic cones, and non-castings such as pedestrians, motor vehicles and non-motor vehicles, are labeled in the surveillance video images acquired by the video image acquisition devices on the highway to construct a casting detection training data set, and iterative training of the YOLOV algorithm is completed on this data set. During YOLOV algorithm inference, a video image to be detected (i.e., the road casting to-be-detected area taken as a video image) is input into the YOLOV algorithm, which directly returns the attribute (or type) and coordinate position of each casting target detected in the video image, as well as the attribute (or type) and coordinate position of each non-casting target.
Taking fig. 2 as an example, 2-a in fig. 2 is a video image to be detected, in which there are 4 cars Car1, Car2, Car3, Car4, 2 trucks Trunk1, Trunk2, and 2 castings Object1, Object2. Detecting this image with the mixed Gaussian background modeling algorithm, the foreground targets within the field of view are found to be Car2, Car3, Car4, Trunk2, Object1, Object2 and a tree shadow, encircled by dot-dash lines as shown in 2-b of fig. 2. The mixed Gaussian background modeling algorithm is easily affected by environmental factors such as illumination and shadow, and the tree shadow shown in 2-b of fig. 2 is falsely detected as a foreground target. Detecting the same image with the target detection algorithm, the detected targets are Car2, Car3, Car4 and Trunk2 marked with non-casting attributes and Object2 marked with a casting attribute, encircled by dotted lines as shown in 2-c of fig. 2.
In addition, deduplication processing is performed on the at least one first candidate object and the at least one second candidate object, that is, the two sets are overlapped and fused: a first candidate object and a second candidate object whose overlap ratio is greater than or equal to a first set threshold are determined to be the same candidate object. The first set threshold may be set according to the experience of those skilled in the art, obtained from the results of multiple experiments, or set according to the actual application scenario, which is not limited in the embodiment of the application. After deduplication, the non-casting targets detected by the target detection model are removed, completing the preliminary screening and eliminating non-casting targets from the road area image; whether each remaining candidate object, marked with the casting attribute or left unmarked, actually belongs to a casting is then confirmed through a target classification model, thereby obtaining the first road casting condition within the field of view of the video image acquisition device. For example, the targets detected by the mixed Gaussian background modeling algorithm and by the target detection algorithm are fused by calculating the IOU (Intersection over Union) of the target frames detected by the two algorithms, which determines whether the two algorithms detected the same target: when the IOU is greater than the fusion threshold, they are considered the same target; otherwise, different targets.
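The IOU-based fusion can be sketched as follows. This is a minimal sketch; the box format (x1, y1, x2, y2) and the fusion threshold value are illustrative assumptions.

```python
# Hedged sketch: IOU of two boxes and overlap fusion of the two candidate sets.

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse(fg_boxes, det_boxes, fusion_threshold=0.5):
    """Pair each foreground (first-candidate) box with a detector
    (second-candidate) box when their IOU reaches the fusion threshold."""
    return [(i, j)
            for i, a in enumerate(fg_boxes)
            for j, b in enumerate(det_boxes)
            if iou(a, b) >= fusion_threshold]
```

Paired indices are treated as one target; unpaired detector boxes with non-casting attributes can then be used to discard the corresponding foreground regions.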
Then, according to the non-casting targets such as people, vehicles and non-motor vehicles detected by the target detection algorithm and their coordinate positions, the non-casting targets among the fused targets are removed; what remains is the suspected road casting area encircled by dotted lines as shown in 2-d of fig. 2. However, since the mixed Gaussian background modeling algorithm is highly susceptible to light changes, false detections may occur in the suspected road casting area. The embodiment of the application therefore performs a secondary confirmation of the suspected road casting area based on a target classification algorithm, so as to eliminate false alarms caused by light and shadow. Specifically, the embodiment of the application completes the secondary verification of the suspected road casting area based on ResNet (Residual Network). A ResNet network has two basic blocks: the Conv Block, whose input and output dimensions differ and which is used to change the dimensions of the network, and the Identity Block, whose input and output dimensions are the same and which can be connected in series to deepen the network. The deep residual network overcomes the low learning efficiency and stagnating accuracy caused by increasing network depth, and achieves a better feature extraction effect.
Based on surveillance video images acquired by the video image acquisition devices on the highway, the embodiment of the application crops images of castings and non-castings, labels the cropped castings and non-castings by type, constructs a target classification training data set, and completes iterative training of the ResNet network on this data set. Then the target images in the suspected road casting area (i.e., the targets marked with the casting attribute and the unmarked targets) are cropped out and scaled to a fixed size. The scaled target images to be detected are then input into the target classification model for inference, which directly returns the probabilities that each image belongs to the different casting and non-casting types. In this way, false alarms caused by tree shadows can be eliminated, i.e., the false alarm rate of road casting detection can be reduced and accurate detection of road castings completed.
And 102, determining a second road throwing object condition outside the visual field range of the video image acquisition equipment based on the motion attribute data of each vehicle in the road monitoring area.
In the embodiment of the application, because the field of view of the video image acquisition device is limited, the detection algorithm used on video images has a large error outside that field of view and cannot effectively detect road castings there. Therefore, to ensure real-time detection of road castings over the whole road section, it is necessary to detect whether road castings exist outside the field of view of the video image acquisition device. That is, motion state information such as the speed, heading angle and lane of each vehicle target is acquired based on the C-V2X (Cellular Vehicle-to-Everything) technology, and castings outside the field of view are inferred from abnormal driving behaviors, such as braking, decelerating and lane changing, of multiple vehicles within a period of time. Braking, decelerating and lane-changing actions caused by forward collision risk, overtaking and the like are excluded through the motion state information of adjacent vehicle targets, which reduces the false alarm rate of road casting detection. By further combining high-precision map information, braking, decelerating and lane-changing actions caused by expressway ramp exits can also be excluded, further reducing the false alarm rate of road casting detection.
Specifically, motion attribute data of each vehicle whose acquisition time falls within a preset period is first obtained for the region outside the field of view of the video image acquisition device. For any road surface position, based on the motion attribute data of the vehicles corresponding to that position within the preset period, a first number can be determined, namely the number of vehicles passing the position within the period, and a second number, namely the number of those vehicles exhibiting abnormal behavior, where the abnormal behavior includes any one of deceleration, braking or lane change. Then, the second road casting condition of the road surface position is determined based on the first number and the second number.
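The counting step can be sketched as follows. This is a minimal sketch under stated assumptions: the record layout (vehicle id, behaviour label) and the ratio threshold are illustrative, not fixed by the embodiment.

```python
# Hedged sketch: first number / second number at one road-surface position.

ABNORMAL = {"brake", "decelerate", "lane_change"}

def casting_counts(vehicle_records, ratio_threshold=0.3):
    """vehicle_records: behaviour of each vehicle passing the position within
    the preset period, e.g. [("car1", "cruise"), ("car2", "brake"), ...].
    Returns (first_number, second_number, casting_suspected)."""
    first = len(vehicle_records)
    second = sum(1 for _, behaviour in vehicle_records if behaviour in ABNORMAL)
    suspected = first > 0 and second / first >= ratio_threshold
    return first, second, suspected
```

The later fork-aware refinement replaces the single threshold with two, so this ratio test corresponds to the non-fork case.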
Illustratively, when analyzing road casting events it can be found that, when a road casting event occurs, a vehicle driver who finds a casting on the road ahead typically performs abnormal driving actions such as braking, decelerating or changing lanes. For example, taking fig. 3 as an example, the vehicle Car2 shown in 3-a of fig. 3 is located in the third lane from the left at time t; on finding the road casting Object1 ahead, Car2 performs actions such as braking and decelerating, and at time t+1 Car2 depresses the brake pedal and its speed decreases compared with time t. At time t+2, Car2 changes lane to the second lane from the left; compared with time t, the lane in which Car2 is located has changed.
For example, the vehicles Car1, Car2, Car3, Trunk1 and Trunk2 shown in 3-a of fig. 3 are all intelligent networked vehicles. An intelligent networked vehicle packages its real-time state, such as its speed, high-precision position and brake pedal status, into a Basic Safety Message (BSM) and broadcasts it, informing surrounding vehicles and the road side unit RSU (Road Side Unit) in real time. Assume that within a certain time period Time0 the road side device receives real-time state information from Num1 vehicles, of which Num0 vehicles exhibit deceleration, braking or lane-changing behavior at a certain coordinate position (Longitude1, Latitude1); when Num0/Num1 is greater than or equal to a certain set threshold, it can be judged that a road casting event has occurred on the road. It should be noted that if a vehicle is not an intelligent networked vehicle, the road side unit RSU cannot obtain its motion state information through the C-V2X technology; in that case an edge computing terminal (MEC) deployed at the road side can obtain the speed, high-precision position and other information of the non-networked vehicle from the detection information of sensors such as millimeter wave radar and lidar, and transmit the relevant information to the road side unit RSU, ensuring real-time detection of road casting events.
In order to exclude braking, decelerating and lane-changing behaviors caused by forward collision risk, overtaking and the like, the motion attribute data of such vehicles are screened out of the count of vehicles with abnormal behavior, ensuring the accuracy of the second number. Therefore, when determining the second number, for each vehicle with abnormal behavior it is determined, from its motion attribute data at each acquisition time within the preset period, whether the vehicle was overtaking or urgently avoiding the vehicle ahead at that road surface position; if so, the second number is reduced by 1. For example, if the vehicle Car3 shown in 3-b of fig. 3 is in the second lane from the left at time t, is in the first lane from the left at time t+1, and at time t+2 its speed exceeds that of Car2 and its position is ahead of Car2, it can be judged that the lane change of Car3 was caused by overtaking, and this lane-change data is not counted. Alternatively, the road side unit RSU receives the real-time state information of Car2 and Car3; when Car3 is found braking and decelerating at time t, the relative distance between Car2 and Car3 is analyzed, and if the relative distance at time t is smaller than that at time t−1, it can be judged that the braking of Car3 was caused by urgent avoidance of the vehicle Car2 ahead, and this braking data is not counted.
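The two exclusion checks above can be sketched as follows. This is an illustrative sketch: the (lane, position, speed) tuple layout is a simplified stand-in for the BSM fields, and the function names are assumptions.

```python
# Hedged sketch: exclude overtaking lane changes and avoidance braking
# from the abnormal-behavior count.

def lane_change_is_overtaking(subject, front):
    """subject / front: time-ordered (lane, position, speed) samples for the
    lane-changing vehicle and the vehicle that was ahead of it. The change is
    treated as overtaking when the subject changed lane and finally exceeds
    the front vehicle in both position and speed."""
    changed_lane = subject[0][0] != subject[-1][0]
    passed_front = (subject[-1][1] > front[-1][1]
                    and subject[-1][2] > front[-1][2])
    return changed_lane and passed_front

def braking_is_avoidance(gap_t, gap_t_minus_1):
    """Braking is attributed to urgent avoidance when the gap to the vehicle
    ahead shrank between time t-1 and time t."""
    return gap_t < gap_t_minus_1
```

A vehicle whose abnormal behavior passes either check is subtracted from the second number before the ratio test.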
In addition, in order to exclude braking, decelerating or lane-changing behaviors caused by road forks (such as expressway ramp exits), when determining the road casting condition of a certain road surface position, it is first determined whether the position is at a fork. If the position is not at a fork: when the ratio of the second number to the first number is greater than or equal to a second set threshold, it is determined that the second road casting condition of the position is that a road casting exists there; when the ratio is less than the second set threshold, it is determined that no road casting exists there. If the position is at a fork: when the ratio of the second number to the first number is greater than or equal to a third set threshold, it is determined that a road casting exists at the position; when the ratio is between the second set threshold and the third set threshold, it is determined that no road casting exists at the position. Therefore, this scheme can effectively reduce the false alarm rate of road casting detection. The third set threshold is greater than the second set threshold; both thresholds may be set according to the experience of those skilled in the art, obtained from the results of multiple experiments, or set according to the actual application scenario, which is not limited in the embodiment of the application.
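The fork-aware decision can be sketched as follows. This is a minimal sketch; the two threshold values are illustrative assumptions, since the embodiment deliberately leaves them open.

```python
# Hedged sketch: fork-aware second-road-casting decision. threshold_fork is
# the "third set threshold" and must exceed threshold_plain, the "second set
# threshold"; the 0.3 / 0.6 values are purely illustrative.

def second_casting_condition(first_number, second_number, at_fork,
                             threshold_plain=0.3, threshold_fork=0.6):
    """Return True when a road casting is judged to exist at the position."""
    if first_number == 0:
        return False
    ratio = second_number / first_number
    if not at_fork:
        return ratio >= threshold_plain
    # At a fork, a moderate ratio is explained by vehicles leaving via the ramp,
    # so only a ratio above the higher threshold indicates a casting.
    return ratio >= threshold_fork
```

This matches the ramp example that follows: a lane-change ratio in the normal range at a ramp area is treated as ordinary exit behavior, not a casting event.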
For example, the vehicle Car3 shown in 3-c of fig. 3 is in the second lane from the right at time t and changes to the first lane from the right at time t+1; compared with time t, the lane in which Car3 is located has changed. The road side unit RSU receives the real-time state information of Car3. Assume that within a certain time period Time0 the road side device receives real-time state information from num1 vehicles in the second lane from the right, of which num0 vehicles finally change to the first lane from the right, and the high-precision map information shows that the current position is in an expressway ramp area. Then, as long as the ratio num0/num1 stays within the normal value range (between the second set threshold and the third set threshold), the lane-changing behavior of the vehicles is judged to be normal, because vehicles must change to the first lane from the right in order to leave the current road via the ramp exit; it can therefore be determined that no road casting event has occurred in the second lane from the right. If the ratio num0/num1 is not within the normal value range (i.e., is greater than the third set threshold), it can be determined that a road casting event has occurred in the second lane from the right.
Finally, after the road casting condition in a certain road monitoring area (such as a certain traffic road section) is determined (including the first road casting condition and the second road casting condition, for example the casting detection time, the lane in which the road casting is located and the road casting position information), the road casting condition in the area can be broadcast through at least one road side device in the area, so that each vehicle within the monitoring range of that road side device can receive the road casting condition in time through its installed on-board unit (OBU), and decelerate or change lanes in advance to avoid the road casting, effectively ensuring driving safety. Meanwhile, the road side unit RSU can upload the detected road casting information to the road supervision platform, so that patrol personnel can obtain it in time and handle it accordingly.
The above embodiment shows that the technical solution of the application makes full use of the video image acquisition device and of the motion attribute data of each vehicle, reported by the vehicles themselves or acquired by the radar device in the road monitoring area; the motion attribute data of a vehicle can be acquired and reported in real time by sensing devices (various sensors, etc.) installed on the vehicle. This breaks through the limited detection range of the video image acquisition device and enables road casting detection over the whole road section. Specifically, for any road monitoring area, target recognition is performed on the road area image acquired by the video image acquisition device in the area, that is, the acquired image is detected and recognized to judge whether a road casting exists in it, yielding the first road casting condition within the field of view of the video image acquisition device. Since the detection range of the video image acquisition device is limited, whether road castings exist in the area outside its detection range (i.e., outside its field of view) is determined by analyzing and processing the acquired motion attribute data of each vehicle in the road monitoring area, yielding the second road casting condition outside the field of view of the video image acquisition device.
Therefore, this scheme combines target recognition on the road area images acquired by the video image acquisition device with analysis of the motion attribute data of each vehicle in the road monitoring area; that is, it fully integrates the advantages of vehicle-road cooperation equipment and technologies such as video monitoring cameras, millimeter wave radar, edge computing terminals and C-V2X. It is not constrained by the environmental factors, such as weather, illumination and background pixels, that limit the detection range of a video monitoring camera; it realizes automatic detection of road castings, effectively reducing labor and material costs; and it breaks through the detection limit of the video image acquisition device, realizing road casting detection over the whole road section in which the road monitoring area is located, thereby providing effective support for ensuring driving safety.
Based on the same technical concept, fig. 4 illustrates an electronic device provided by the embodiment of the application, where the electronic device may execute the flow of the road casting detection method. The electronic device may be a server, a component (such as a chip or an integrated circuit) capable of supporting the functions required by the server to implement the method, or of course another device having the functions required to implement the method, such as a traffic control platform.
As shown in fig. 4, the electronic device includes a processor 401 and a memory 402. The specific connection medium between the processor 401 and the memory 402 is not limited in the embodiment of the present application; in fig. 4, the processor 401 and the memory 402 are connected by a bus as an example. The bus may be divided into an address bus, a data bus, a control bus, etc. The memory 402 stores a computer program that, when executed by the processor 401, causes the electronic device to perform: performing target recognition on the road area image acquired by the video image acquisition device in the road monitoring area, and determining a first road casting condition within the field of view of the video image acquisition device; determining a second road casting condition outside the field of view of the video image acquisition device based on the motion attribute data of each vehicle in the road monitoring area; the first road casting condition and the second road casting condition are used to indicate the road casting condition in the road monitoring area; the motion attribute data of each vehicle is reported by the vehicle or collected by a radar device arranged in the road monitoring area.
In some exemplary embodiments, the electronic device is specifically configured to perform:
dividing a road casting object area to be detected from the road area image;
performing foreground target detection on the road casting object to-be-detected area, and determining at least one first candidate object from the road casting object to-be-detected area;
Performing target feature extraction processing on the road casting object to-be-detected area, and determining at least one second candidate object from the road casting object to-be-detected area; each second candidate object is marked with a casting property or a non-casting property;
A first road casting condition within a field of view of the video image capturing device is determined based on the at least one first candidate object and the at least one second candidate object.
In some exemplary embodiments, the electronic device is specifically configured to perform:
Determining at least one foreground target through a mixed Gaussian model in the area to be detected of the road casting object; each foreground object is a first candidate object;
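The foreground detection step just described can be illustrated with a simplified sketch. The snippet below keeps a single running Gaussian per pixel rather than a full mixture of Gaussians, which is an assumption made for brevity; a production system would typically use a full mixture-of-Gaussians implementation such as OpenCV's `BackgroundSubtractorMOG2`. All parameter values are illustrative.

```python
import numpy as np

class RunningGaussianBackground:
    """Simplified per-pixel Gaussian background model (a single-Gaussian
    stand-in for the mixture-of-Gaussians model mentioned in the text)."""

    def __init__(self, alpha=0.05, k=2.5, init_var=15.0 ** 2):
        self.alpha = alpha        # learning rate for background updates
        self.k = k                # match threshold, in standard deviations
        self.init_var = init_var  # initial per-pixel variance
        self.mean = None
        self.var = None

    def apply(self, frame):
        """Return a boolean foreground mask for a grayscale frame."""
        frame = frame.astype(np.float64)
        if self.mean is None:
            self.mean = frame.copy()
            self.var = np.full(frame.shape, self.init_var)
            return np.zeros(frame.shape, dtype=bool)
        dist = np.abs(frame - self.mean)
        foreground = dist > self.k * np.sqrt(self.var)
        # Update statistics only for background pixels, so a newly
        # dropped object keeps standing out as foreground for a while.
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (dist[bg] ** 2 - self.var[bg])
        return foreground
```

Each connected region of the resulting mask would then be taken as one first candidate object.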
the electronic device is specifically configured to perform:
determining the at least one second candidate object through a target detection model in the road casting object to-be-detected area; the target detection model is used to identify the attributes and coordinate positions of casting objects and non-casting objects.
In some exemplary embodiments, the electronic device is specifically configured to perform:
Performing de-duplication processing based on the at least one first candidate object and the at least one second candidate object, wherein a first candidate object and a second candidate object whose intersection-over-union ratio is greater than or equal to a first set threshold are determined to be the same candidate object;
And after the de-duplication processing, determining, through a target classification model, whether each candidate object marked with the casting attribute or left unmarked belongs to a road casting, thereby obtaining the first road casting condition within the field of view of the video image acquisition device.
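The de-duplication step above can be sketched as follows. The `(x1, y1, x2, y2)` box format, the helper names, and the 0.5 threshold are illustrative assumptions, not values from the application.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def deduplicate(first_candidates, second_candidates, iou_threshold=0.5):
    """Merge foreground-detector boxes (unlabeled) with detector boxes
    (labeled 'casting'/'non-casting'): a first candidate overlapping a
    second candidate by at least the threshold is the same object."""
    merged = list(second_candidates)           # (box, label) pairs
    for box in first_candidates:               # unlabeled foreground boxes
        if all(iou(box, b) < iou_threshold for b, _ in second_candidates):
            merged.append((box, None))         # no label: classify later
    return merged
```

Candidates kept with the 'casting' label, or with no label at all, would then be passed to the target classification model.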
In some exemplary embodiments, the electronic device is specifically configured to perform:
Acquiring the motion attribute data of each vehicle outside the field of view of the video image acquisition device, wherein the acquisition time of the motion attribute data falls within a preset period;
determining, for any road surface position, a first number of vehicles passing through the road surface position and a second number of vehicles passing through the road surface position and having abnormal behavior in the preset period based on motion attribute data of the vehicles corresponding to the road surface position in the preset period; the abnormal behavior includes any one of deceleration, braking, or lane change;
A second road projectile condition for the road surface location is determined based on the first number and the second number.
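The counting step above can be sketched as follows. The record fields (`vehicle_id`, `position`, `behavior`) are illustrative assumptions about what the reported motion attribute data contains.

```python
from collections import defaultdict

ABNORMAL = {"deceleration", "braking", "lane_change"}

def count_vehicles(records):
    """records: iterable of dicts with 'vehicle_id', 'position', 'behavior'
    keys, all taken within the preset period. Returns a mapping
    {position: (first_number, second_number)} where first_number is the
    number of distinct vehicles passing the position and second_number is
    the number of those that showed any abnormal behavior there."""
    passing = defaultdict(set)   # position -> vehicle ids seen there
    abnormal = defaultdict(set)  # position -> vehicle ids behaving abnormally
    for r in records:
        passing[r["position"]].add(r["vehicle_id"])
        if r["behavior"] in ABNORMAL:
            abnormal[r["position"]].add(r["vehicle_id"])
    return {pos: (len(passing[pos]), len(abnormal[pos])) for pos in passing}
```

Counting distinct vehicle ids (sets rather than raw record counts) keeps a single vehicle that reports several times from inflating either number.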
In some exemplary embodiments, the electronic device is specifically configured to perform:
for a vehicle with abnormal behavior, determining, based on the motion attribute data of the vehicle at each acquisition time within the preset period, whether the abnormal behavior at the road surface position was overtaking or emergency avoidance of a preceding vehicle; if so, decrementing the second number by 1.
In some exemplary embodiments, the electronic device is specifically configured to perform:
If the road surface position is not at a fork, determining that the second road casting condition is that a road casting exists at the road surface position when the ratio of the second number to the first number is greater than or equal to a second set threshold; or determining that the second road casting condition is that no road casting exists at the road surface position when the ratio of the second number to the first number is smaller than the second set threshold;
if the road surface position is at a fork, determining that the second road casting condition is that no road casting exists at the road surface position when the ratio of the second number to the first number is between the second set threshold and a third set threshold, and determining that the second road casting condition is that a road casting exists at the road surface position when the ratio of the second number to the first number is greater than or equal to the third set threshold; the third set threshold is greater than the second set threshold.
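The threshold logic described above can be sketched as follows. The threshold values 0.3 and 0.6 are illustrative assumptions, not values from the application; they would be tuned in practice.

```python
def second_road_casting_condition(first_count, second_count, at_fork,
                                  second_threshold=0.3, third_threshold=0.6):
    """Decide whether a road casting exists at a road surface position,
    given the number of passing vehicles (first_count) and the number of
    those showing abnormal behavior (second_count)."""
    if first_count == 0:
        return False  # no traffic passed, so no evidence either way
    ratio = second_count / first_count
    if not at_fork:
        return ratio >= second_threshold
    # At a fork, deceleration and lane changes are normal driving,
    # so a higher bar (the third threshold) applies.
    return ratio >= third_threshold
```

For example, 8 of 20 vehicles braking mid-segment (ratio 0.4) would flag a casting, while the same ratio at a fork would not.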
In some exemplary embodiments, the electronic device is further configured to perform:
Broadcasting the first road casting condition and the second road casting condition through at least one road side device arranged on the road where the road monitoring area is located, so that each vehicle running on the road where the road monitoring area is located can avoid the road casting.
In the embodiment of the present application, the memory 402 stores instructions executable by the at least one processor 401, and the at least one processor 401 may execute the steps included in the road projectile detection method by executing the instructions stored in the memory 402.
The processor 401 is the control center of the electronic device; it may connect various parts of the electronic device using various interfaces and lines, and implements data processing by running or executing the instructions stored in the memory 402 and calling the data stored in the memory 402. Optionally, the processor 401 may include one or more processing units, and may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, etc., and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 401. In some embodiments, the processor 401 and the memory 402 may be implemented on the same chip; in other embodiments, they may be implemented separately on their own chips.
The processor 401 may be a general-purpose processor such as a central processing unit (CPU), a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps and logic diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the method disclosed in connection with the road casting detection method embodiments may be embodied as being executed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
The memory 402, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules. The memory 402 may include at least one type of storage medium, for example flash memory, hard disk, multimedia card, card memory, random access memory (RAM), static random access memory (SRAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), magnetic memory, magnetic disk, optical disk, and the like. The memory 402 may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 402 in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function, for storing program instructions and/or data.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (9)

1. A method of detecting a road projectile, comprising:
carrying out target recognition on the road area image acquired by the video image acquisition equipment in the road monitoring area, and determining the first road throwing object condition in the view range of the video image acquisition equipment;
Determining a second road throwing object condition outside the visual field range of the video image acquisition equipment based on the motion attribute data of each vehicle in the road monitoring area; the first road casting condition and the second road casting condition are used for indicating the road casting condition in the road monitoring area; the motion attribute data of each vehicle are reported by the vehicle or collected by radar equipment arranged in the road monitoring area;
The determining, based on the motion attribute data of each vehicle in the road monitoring area, a second road throwing object condition outside the viewing area of the video image collecting apparatus includes:
Acquiring the motion attribute data of each vehicle outside the field of view of the video image acquisition device, wherein the acquisition time of the motion attribute data falls within a preset period;
determining, for any road surface position, a first number of vehicles passing through the road surface position and a second number of vehicles passing through the road surface position and having abnormal behavior in the preset period based on motion attribute data of the vehicles corresponding to the road surface position in the preset period; the abnormal behavior includes any one of deceleration, braking, or lane change;
A second road projectile condition for the road surface location is determined based on the first number and the second number.
2. The method of claim 1, wherein the performing object recognition on the road area image acquired by the video image acquisition device in the road monitoring area, and determining the first road casting condition in the view range of the video image acquisition device, comprises:
dividing a road casting object area to be detected from the road area image;
performing foreground target detection on the road casting object to-be-detected area, and determining at least one first candidate object from the road casting object to-be-detected area;
Performing target feature extraction processing on the road casting object to-be-detected area, and determining at least one second candidate object from the road casting object to-be-detected area; each second candidate object is marked with a casting property or a non-casting property;
A first road casting condition within a field of view of the video image capturing device is determined based on the at least one first candidate object and the at least one second candidate object.
3. The method of claim 2, wherein the performing foreground object detection on the road projectile to-be-detected area, determining at least one first candidate object from the road projectile to-be-detected area, comprises:
Determining at least one foreground target through a mixed Gaussian model in the area to be detected of the road casting object; each foreground object is a first candidate object;
The target feature extraction processing is performed on the road casting object to-be-detected area, and at least one second candidate object is determined from the road casting object to-be-detected area, including:
determining the at least one second candidate object through a target detection model in the road casting object to-be-detected area; the target detection model is used to identify the attributes and coordinate positions of casting objects and non-casting objects.
4. The method of claim 2, wherein the determining a first road jettisoning condition within a field of view of the video image acquisition device based on the at least one first candidate object and the at least one second candidate object comprises:
Performing de-duplication processing based on the at least one first candidate object and the at least one second candidate object, wherein a first candidate object and a second candidate object whose intersection-over-union ratio is greater than or equal to a first set threshold are determined to be the same candidate object;
And after the de-duplication processing, determining, through a target classification model, whether each candidate object marked with the casting attribute or left unmarked belongs to a road casting, thereby obtaining the first road casting condition within the field of view of the video image acquisition device.
5. The method of claim 1, wherein the second number of vehicles having abnormal behavior is determined by:
for a vehicle with abnormal behavior, determining, based on the motion attribute data of the vehicle at each acquisition time within the preset period, whether the abnormal behavior at the road surface position was overtaking or emergency avoidance of a preceding vehicle; if so, decrementing the second number by 1.
6. The method of claim 1, wherein the determining a second road projectile condition for the road surface location based on the first number and the second number comprises:
If the road surface position is not at a fork, determining that the second road casting condition is that a road casting exists at the road surface position when the ratio of the second number to the first number is greater than or equal to a second set threshold; or determining that the second road casting condition is that no road casting exists at the road surface position when the ratio of the second number to the first number is smaller than the second set threshold;
if the road surface position is at a fork, determining that the second road casting condition is that no road casting exists at the road surface position when the ratio of the second number to the first number is between the second set threshold and a third set threshold, and determining that the second road casting condition is that a road casting exists at the road surface position when the ratio of the second number to the first number is greater than or equal to the third set threshold; the third set threshold is greater than the second set threshold.
7. The method as recited in claim 1, further comprising:
Broadcasting the first road casting condition and the second road casting condition through at least one road side device arranged on the road where the road monitoring area is located, so that each vehicle running on the road where the road monitoring area is located can avoid the road casting.
8. An electronic device comprising a processor and a memory, the processor being coupled to the memory, the memory storing a computer program that, when executed by the processor, causes the electronic device to perform: carrying out target recognition on the road area image acquired by the video image acquisition equipment in the road monitoring area, and determining the first road throwing object condition in the view range of the video image acquisition equipment; determining a second road throwing object condition outside the visual field range of the video image acquisition equipment based on the motion attribute data of each vehicle in the road monitoring area; the first road casting condition and the second road casting condition are used for indicating the road casting condition in the road monitoring area; the motion attribute data of each vehicle are reported by the vehicle or collected by radar equipment arranged in the road monitoring area;
the electronic device is specifically configured to perform:
Acquiring the motion attribute data of each vehicle outside the field of view of the video image acquisition device, wherein the acquisition time of the motion attribute data falls within a preset period;
determining, for any road surface position, a first number of vehicles passing through the road surface position and a second number of vehicles passing through the road surface position and having abnormal behavior in the preset period based on motion attribute data of the vehicles corresponding to the road surface position in the preset period; the abnormal behavior includes any one of deceleration, braking, or lane change;
A second road projectile condition for the road surface location is determined based on the first number and the second number.
9. A computer readable storage medium, characterized in that it stores a computer program executable by an electronic device, which when run on the electronic device causes the electronic device to perform the method of any one of claims 1 to 7.
CN202210230541.8A 2022-03-10 2022-03-10 Road casting detection method, electronic equipment and storage medium Active CN114694060B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210230541.8A CN114694060B (en) 2022-03-10 2022-03-10 Road casting detection method, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114694060A CN114694060A (en) 2022-07-01
CN114694060B true CN114694060B (en) 2024-05-03

Family

ID=82137209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210230541.8A Active CN114694060B (en) 2022-03-10 2022-03-10 Road casting detection method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114694060B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115147441A (en) * 2022-07-31 2022-10-04 江苏云舟通信科技有限公司 Cutout special effect processing system based on data analysis
CN116453065B (en) * 2023-06-16 2023-09-19 云途信息科技(杭州)有限公司 Road surface foreign matter throwing identification method and device, computer equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339677A (en) * 2016-08-23 2017-01-18 天津光电高斯通信工程技术股份有限公司 Video-based railway wagon dropped object automatic detection method
CN106781570A (en) * 2016-12-30 2017-05-31 大唐高鸿信息通信研究院(义乌)有限公司 A kind of highway danger road conditions suitable for vehicle-mounted short distance communication network are recognized and alarm method
CN109212520A (en) * 2018-09-29 2019-01-15 河北德冠隆电子科技有限公司 The road conditions perception accident detection alarm system and method for comprehensive detection radar
CN109886219A (en) * 2019-02-26 2019-06-14 中兴飞流信息科技有限公司 Shed object detecting method, device and computer readable storage medium
CN111274982A (en) * 2020-02-04 2020-06-12 浙江大华技术股份有限公司 Method and device for identifying projectile and storage medium
CN112037266A (en) * 2020-11-05 2020-12-04 北京软通智慧城市科技有限公司 Falling object identification method and device, terminal equipment and storage medium
CN112149649A (en) * 2020-11-24 2020-12-29 深圳市城市交通规划设计研究中心股份有限公司 Road spray detection method, computer equipment and storage medium
CN112330658A (en) * 2020-11-23 2021-02-05 丰图科技(深圳)有限公司 Sprinkler detection method, device, electronic device, and storage medium
CN114119653A (en) * 2021-09-28 2022-03-01 浙江大华技术股份有限公司 Sprinkler detection method, device, electronic device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10754344B2 (en) * 2018-07-19 2020-08-25 Toyota Research Institute, Inc. Method and apparatus for road hazard detection


Also Published As

Publication number Publication date
CN114694060A (en) 2022-07-01

Similar Documents

Publication Publication Date Title
Tian et al. An automatic car accident detection method based on cooperative vehicle infrastructure systems
US11074813B2 (en) Driver behavior monitoring
US10885777B2 (en) Multiple exposure event determination
US11836985B2 (en) Identifying suspicious entities using autonomous vehicles
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN114694060B (en) Road casting detection method, electronic equipment and storage medium
JP6571545B2 (en) Object detection apparatus and object detection method
CN111382768A (en) Multi-sensor data fusion method and device
EP3403219A1 (en) Driver behavior monitoring
CN104616502A (en) License plate identification and positioning system based on combined type vehicle-road video network
Liu et al. Vehicle detection and ranging using two different focal length cameras
CN113378751A (en) Traffic target identification method based on DBSCAN algorithm
Abdel-Aty et al. Using closed-circuit television cameras to analyze traffic safety at intersections based on vehicle key points detection
CN112489383A (en) Early warning system and method for preventing red light running accident based on machine vision
CN115618932A (en) Traffic incident prediction method and device based on internet automatic driving and electronic equipment
CN116587978A (en) Collision early warning method and system based on vehicle-mounted display screen
CN114414259A (en) Anti-collision test method and device for vehicle, electronic equipment and storage medium
Lai et al. Sensor fusion of camera and MMW radar based on machine learning for vehicles
CN117315407B (en) Method and device for identifying object, storage medium and electronic device
US20240020964A1 (en) Method and device for improving object recognition rate of self-driving car
US11592565B2 (en) Flexible multi-channel fusion perception
CN117799633A (en) Prediction method and device and intelligent driving equipment
Zhan et al. Vehicle Forward Collision Warning Based on Improved Deep Neural Network
CN116721248A (en) Target detection method and target detection device
CN114879194A (en) Control method and device for multi-radar vehicle data homologous fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant