WO2022262471A1 - Method, apparatus, device and storage medium for detecting abnormal vehicles (异常车辆的检测方法、装置、设备及存储介质) - Google Patents


Info

Publication number
WO2022262471A1
Authority
WO
WIPO (PCT)
Prior art keywords
image sequence
abnormal
detection frame
vehicle
detection
Prior art date
Application number
PCT/CN2022/091653
Other languages
English (en)
French (fr)
Inventor
吴捷 (Wu Jie)
Original Assignee
北京字跳网络技术有限公司 (Beijing Zitiao Network Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co., Ltd. (北京字跳网络技术有限公司)
Priority to US 18/262,806 (published as US20240071215A1)
Publication of WO2022262471A1

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/44 - Event detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/017 - Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 - Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Definitions

  • Embodiments of the present disclosure relate to the technical field of image processing, and in particular, to a method, device, equipment, and storage medium for detecting abnormal vehicles.
  • Related technologies are usually based on deep learning methods to detect abnormally stopped vehicles on the road.
  • In related technologies, due to the relatively small number of abnormal-stop samples and insufficient sample labeling, the detection model can generally be trained only on samples of normally driving vehicles, and behaviors that deviate significantly from the normal driving state are identified as abnormal stop behaviors.
  • the present disclosure provides a method for detecting abnormal vehicles, including:
  • the method also includes:
  • the determining the detection frame located on the target road in the background image as the detection frame of the abnormal vehicle in which an abnormal parking event occurs includes:
  • the method also includes:
  • at least one image sequence composed of background images is determined from the background images of the at least part of the video frames.
  • the start time and end time of the image sequence are determined as the start time and end time of the abnormal parking event corresponding to the image sequence, wherein the abnormal parking event corresponding to the image sequence refers to the abnormal parking event of the same abnormal vehicle corresponding to the image sequence.
  • the determining at least one image sequence composed of background images from the background images of at least part of the video frames includes:
  • the target detection frame refers to the detection frame of an abnormal vehicle
  • the at least one image sequence includes a first image sequence and a second image sequence, the first image sequence corresponds to an abnormal parking event of a first abnormal vehicle, and the second image sequence corresponds to an abnormal parking event of a second abnormal vehicle;
  • the method also includes:
  • before determining the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, in response to the intersection-over-union ratio between the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence being greater than a third threshold, it is determined that the first abnormal parking event and the second abnormal parking event are the same abnormal parking event;
  • the method also includes:
  • the method also includes:
  • the video frame in which the same abnormal vehicle appears for the first time among the plurality of video frames is determined as the target frame.
  • the method also includes:
  • the earliest start time and latest end time corresponding to the plurality of abnormal parking events are determined as the start time and end time of the same abnormal parking event.
  • the present disclosure provides a detection device for an abnormal vehicle, including:
  • a video acquisition unit configured to acquire the monitoring video of the target road
  • a background image extraction unit configured to perform background modeling processing based on the surveillance video, to obtain background images of at least some video frames in the surveillance video;
  • a first vehicle detection unit configured to perform vehicle detection processing on the background image
  • a detection frame determining unit configured to determine a detection frame located on the target road in the background image as a detection frame of an abnormal vehicle in which an abnormal parking event occurs.
  • the device also includes:
  • a mask extraction unit configured to perform differential mask extraction processing on the surveillance video to obtain a mask of the target road
  • the detection frame determination unit includes:
  • the first intersection-over-union ratio calculation subunit is configured to determine the intersection-over-union ratio between the detection frame in the background image and the mask
  • the detection frame determination subunit is configured to determine the detection frame whose intersection ratio is greater than the first threshold as the detection frame of the abnormal vehicle in which the abnormal parking event occurs.
  • the device also includes:
  • An image sequence construction unit configured to determine at least one image sequence composed of background images from the background images of at least part of the video frames, wherein the background images in the same image sequence include the detection frame of the same abnormal vehicle;
  • the abnormal parking event determination unit is configured to, for each image sequence, determine the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, wherein the image
  • the abnormal parking event corresponding to the sequence refers to the abnormal parking event of the same abnormal vehicle corresponding to the image sequence.
  • the image sequence construction unit includes:
  • the second intersection-over-union ratio calculation subunit is configured to, for any two background images whose shooting interval is less than a preset interval, calculate the intersection-over-union ratio between the target detection frame in one background image and the target detection frame in the other background image, wherein
  • the target detection frame refers to the detection frame of an abnormal vehicle;
  • a detection frame association subunit configured to, in response to the intersection-over-union ratio between the first target detection frame in one background image and the second target detection frame in the other background image being greater than a second threshold, determine that the first target detection frame and the second target detection frame are detection frames of the same abnormal vehicle
  • the image sequence combination subunit is configured to add the one background image and the other background image to the same image sequence.
  • the at least one image sequence includes a first image sequence and a second image sequence, the first image sequence corresponds to an abnormal parking event of a first abnormal vehicle, and the second image sequence corresponds to an abnormal parking event of a second abnormal vehicle; and the device also includes:
  • an abnormal parking event fusion unit configured to, in response to the intersection-over-union ratio between the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence being greater than a third threshold, determine that the first abnormal parking event and the second abnormal parking event are the same abnormal parking event;
  • An image sequence combining unit configured to combine the first image sequence and the second image sequence into one image sequence.
  • the device further includes: a second vehicle detection unit, configured to, for each image sequence, perform vehicle detection on other video frames located before the image sequence in the surveillance video;
  • a detection frame deletion unit configured to delete the detection frame of the same abnormal vehicle from the target image sequence in response to the detection frame of the same abnormal vehicle corresponding to the image sequence being included in the other video frames;
  • the abnormal parking event determination unit is configured to, when the other video frames do not include the detection frame of the same abnormal vehicle corresponding to the image sequence, execute the step of determining, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence.
  • the device also includes:
  • a video frame acquisition unit configured to, for each image sequence, obtain a plurality of video frames corresponding to the image sequence based on the correspondence between background images and video frames;
  • a third vehicle detection unit configured to perform vehicle detection on the plurality of video frames
  • a target frame determination unit configured to determine, based on the detection result, the video frame in which the same abnormal vehicle appears for the first time among the plurality of video frames as the target frame;
  • a deleting unit configured to delete the part of the image sequence whose shooting time is earlier than that of the target frame.
  • the device further includes: an abnormal parking event merging unit, configured to determine a plurality of abnormal parking events whose start times are within a preset time as the same abnormal parking event, and to determine the earliest start time and latest end time corresponding to the plurality of abnormal parking events as the start time and end time of the same abnormal parking event.
  • the present disclosure provides a computer device, including:
  • a memory;
  • a processor coupled to the memory, the processor configured to perform the detection method as described in any one of the preceding items based on instructions stored in the memory.
  • the present disclosure provides a non-transitory computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the detection method described in any one of the preceding items is implemented.
  • the present disclosure provides a computer program, including: instructions, which when executed by a processor cause the processor to perform the detection method as described in any one of the preceding items.
  • the present disclosure provides a non-transitory computer program product comprising instructions which, when executed by a processor, cause the processor to perform the detection method as described in any one of the preceding items.
  • FIG. 1 is a schematic structural diagram of an abnormal vehicle detection system provided by an embodiment of the present disclosure;
  • FIG. 2 is a flowchart of a method for detecting an abnormal vehicle provided by an embodiment of the present disclosure;
  • FIG. 3 is a flowchart of a method for detecting abnormal vehicles provided by another embodiment of the present disclosure;
  • FIG. 4 is a video frame image of a surveillance video provided by an embodiment of the present disclosure;
  • FIG. 5 is the mask of the target road obtained for the target road in FIG. 4;
  • FIG. 6 is a schematic flowchart of some steps of the method for detecting an abnormal vehicle provided by an embodiment of the present disclosure;
  • FIG. 7 is a flowchart of a method for determining an image sequence provided by some embodiments of the present disclosure;
  • FIG. 8 is a schematic diagram of an image sequence provided by an embodiment of the present disclosure;
  • FIG. 9 is a partial flowchart of an abnormal vehicle detection method provided by some embodiments of the present disclosure;
  • FIG. 10 is a partial flowchart of a method for detecting an abnormal vehicle provided by some embodiments of the present disclosure;
  • FIG. 11 is a partial flowchart of a method for detecting abnormal vehicles provided by some embodiments of the present disclosure;
  • FIG. 12 is a schematic structural diagram of an abnormal vehicle detection device provided by an embodiment of the present disclosure;
  • FIG. 13 is a schematic structural diagram of a server in an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide a method, apparatus, device, and storage medium for detecting abnormal vehicles.
  • FIG. 1 is a schematic structural diagram of an abnormal vehicle detection system provided by an embodiment of the present disclosure. As shown in FIG. 1, the abnormal vehicle detection system 100 may include at least one image acquisition device 101 and a computer device 102 communicatively connected to the image acquisition device 101.
  • The image acquisition device 101 can be installed on an equipment tower next to the target road or on a high-rise building, and is used to photograph the target road to obtain a surveillance video of the target road. The road monitored by the image acquisition device 101 is called the target road; the target road may be, for example, a trunk road, an intersection, or a road section with a high incidence of traffic accidents, but is not limited to the roads listed here.
  • The computer device 102 is used to store and process the surveillance video of the target road and determine the abnormal vehicle in which an abnormal parking event occurs in the surveillance video, wherein an abnormal vehicle refers to a vehicle whose parking time on the road exceeds a preset threshold, such as a vehicle involved in a traffic accident.
  • The computer device 102 in the embodiments of the present disclosure may be a server, a server cluster, or another device with storage and image-processing functions.
  • FIG. 2 is a flowchart of a method for detecting an abnormal vehicle provided by an embodiment of the present disclosure; the method may be exemplarily executed by the computer device in FIG. 1.
  • the abnormal vehicle detection method may include steps S101-S104.
  • Step S101 Obtain the monitoring video of the target road.
  • the surveillance video of the target road can be understood as the video of the target road collected by the image acquisition device within a certain period of time.
  • the surveillance video of the target road includes both stationary and moving objects.
  • Stationary objects can include green belts, guardrails, lane lines, and various road signs, as well as vehicles that have stopped on the road for more than a preset threshold.
  • the moving objects may include normal driving vehicles, pedestrians, etc. on the target road.
  • The computer device may obtain the surveillance video of the target road by loading it from local storage, or by downloading it from the image acquisition device or other equipment. It should be noted that the manner of obtaining the surveillance video in the embodiments of the present disclosure is not limited to loading from local storage and downloading from other devices; other manners known in the art may also be used.
  • Step S102 Perform background modeling processing based on the surveillance video to obtain background images of at least some video frames in the surveillance video.
  • the pixels used to represent the aforementioned static objects constitute the background image of the video frame
  • the pixels used to represent the aforementioned moving object constitute the foreground image of the video frame.
  • The background images of at least part of the video frames in the surveillance video are obtained by comparing the pixels of adjacent video frames: pixels that have not changed between adjacent video frames are determined to be pixels of the background image, thereby obtaining the background image.
  • The background modeling method in this embodiment may be a forward or backward background modeling method; the modeling process is similar to related technologies and is not described in detail here.
  • The background modeling algorithm used in the embodiments of the present disclosure is not limited to the aforementioned forward or backward background modeling algorithms; various background modeling algorithms known in the field of image processing may also be used.
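The adjacent-frame comparison described above can be sketched as follows. This is a minimal illustration in NumPy, not the patent's actual implementation; the function names and the `diff_threshold` value are assumptions made for the example.

```python
import numpy as np

def background_pixels(prev_frame, curr_frame, diff_threshold=5):
    """Boolean map of pixels that did not change between two adjacent
    grayscale frames; unchanged pixels are treated as background."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff <= diff_threshold

def extract_background(frames, diff_threshold=5):
    """Accumulate a background image by keeping, for each pixel, the
    most recent value observed while that pixel was static."""
    background = frames[0].copy()
    for prev, curr in zip(frames, frames[1:]):
        static = background_pixels(prev, curr, diff_threshold)
        background[static] = curr[static]
    return background
```

A vehicle that stays in place across frames keeps its pixels unchanged, so it is absorbed into the background image, which is exactly why a stopped vehicle later shows up in the background for detection.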
  • Step S103 Carry out vehicle detection processing on the background image.
  • a preset vehicle detection model is used to perform vehicle detection processing on the background image to obtain the vehicle in the background image.
  • The vehicle in the background image is a vehicle in which an abnormal parking event occurs: because the vehicle has stopped on the road, it is retained in the background image as background during background modeling.
  • the vehicle detection model is trained by using vehicle sample images and is used to identify vehicles in the images.
  • The vehicle detection model may exemplarily be a Cascade R-CNN model, but is not limited to the Cascade R-CNN model. The Cascade R-CNN model is a multi-stage object detection model that can avoid overfitting during training and quality mismatch during inference.
  • The vehicle detection model used in the embodiments of the present disclosure is not limited to the Cascade R-CNN model; in other applications, the vehicle detection model may also be other models known in the art.
  • When the vehicle detection model detects a vehicle in the background image, it forms, on the background image, a detection frame of the minimum size that can contain the vehicle.
  • the detection frame may be a rectangular frame.
  • After recognizing the detection frame of a vehicle, the vehicle recognition model can also output the confidence that the detection frame corresponds to a vehicle. In practical applications, detection frames whose predicted confidence is less than a preset threshold can be deleted, and only detection frames with confidence greater than the preset threshold are kept.
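The confidence filtering above can be sketched as a one-line post-processing step. The `(x1, y1, x2, y2, score)` tuple layout and the threshold value are assumptions for the example; in practice, a detector such as Cascade R-CNN would produce the raw detections.

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only detection frames whose confidence exceeds the
    preset threshold; each detection is (x1, y1, x2, y2, score)."""
    return [det for det in detections if det[4] > conf_threshold]
```

Applying this after inference discards low-confidence boxes so that only likely vehicles reach the later mask-overlap check.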
  • Step S104 Determine the detection frame on the target road in the background image as the detection frame of the abnormal vehicle that has an abnormal parking event.
  • Since the objects in the background image are all stationary, if a vehicle is recognized in the background image, it can be determined that the vehicle is a stationary vehicle.
  • A stationary vehicle on the target road can be understood as an abnormal vehicle in which an abnormal parking event occurs, so a detection frame that is identified from the background image and located on the target road can be determined as the detection frame of an abnormal vehicle in which an abnormal parking event occurs.
  • The abnormal vehicle detection method performs background modeling on the acquired surveillance video of the target road to obtain the background images of at least part of the video frames, then performs vehicle detection on the background images with the vehicle recognition model, and determines the detection frames located on the target road in the background images as the detection frames of abnormal vehicles in which abnormal parking events occur, which can improve the accuracy of abnormal vehicle detection.
  • Compared with inputting the surveillance video into a pre-trained abnormal vehicle recognition model and directly using that model to identify abnormal vehicles in the video, the abnormal vehicle detection method provided by the embodiments of the present disclosure overcomes the problems of the small number of abnormal vehicle samples and the recognition model's poor ability to recognize abnormal vehicles in unknown scenes.
  • Fig. 3 is a flow chart of a method for detecting abnormal vehicles provided by another embodiment of the present disclosure.
  • the abnormal vehicle detection method includes steps S201-S206.
  • The execution manners and beneficial effects of steps S201-S203 are similar to those of steps S101-S103 described above.
  • Steps S204-S206 are specifically described below; for the other steps, refer to the preceding description.
  • Step S201 Obtain the monitoring video of the target road.
  • Step S202 Perform background modeling processing based on the surveillance video to obtain background images of at least some video frames in the surveillance video.
  • Step S203 Carry out vehicle detection processing on the background image.
  • Step S204 Perform differential mask extraction processing on the surveillance video to obtain the mask of the target road.
  • A mask is a selection used to frame specific areas of an image.
  • The mask of the target road is used to frame the area of the target road in each video frame of the surveillance video.
  • each video frame of the surveillance video may include not only the target road area, but also a building area adjacent to the target road. There may be a parking lot in the adjacent building area, and the vehicles in the parking lot are not the abnormal vehicles referred to in this embodiment.
  • the mask of the target road is used to exclude the vehicle detection frame outside the target road, and then avoid identifying the vehicle outside the target road as an abnormal vehicle.
  • Performing differential mask extraction processing on the surveillance video may include: selecting two video frames in the surveillance video and comparing the pixels at the same position in the two frames; if the pixels at the same position change, that position is determined to be on the target road. The mask of the target road can thus be determined through continuous analysis and comparison.
  • For example, one video frame may be selected every 5 frames as a video frame for determining the mask; the pixels at the same position in two adjacent extracted video frames are then compared, and if the pixel difference at the same position exceeds a preset threshold, the position is determined to be on the target road. The mask of the target road can thus be determined through continuous analysis and comparison.
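The sampling-and-differencing procedure above can be sketched as follows; the sampling step, difference threshold, hit count, and function name are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def road_mask(frames, sample_step=5, diff_threshold=10, min_hits=1):
    """Estimate the road mask: positions whose pixel value changes
    between sampled grayscale frames are assumed to lie on the road,
    since moving traffic passes over them."""
    sampled = frames[::sample_step]
    hits = np.zeros(sampled[0].shape, dtype=np.int32)
    for prev, curr in zip(sampled, sampled[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        hits += (diff > diff_threshold)  # count changes per position
    return hits >= min_hits
```

Raising `min_hits` would require a position to change several times before it is accepted as road, which helps suppress one-off noise.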
  • FIG. 4 is a video frame image of a monitoring video provided by an embodiment of the present disclosure
  • FIG. 5 is a mask of a target road obtained for the target road in FIG. 4
  • In addition to the target road 401, the video frame image 400 also includes a building 402 beside the target road 401 and a green belt 403 beside the target road 401.
  • the mask 404 of the target road can be obtained, and the area of the mask is the area of the target road in the video frame.
  • step S204 provided in the embodiment of the present disclosure may be performed before steps S202-S203, may also be performed after steps S202-S203, or may be performed in parallel with steps S202-S203.
  • Step S205 Determine the intersection-over-union ratio between the detection frame in the background image and the mask.
  • Step S206 Determine the detection frame whose intersection-over-union ratio is greater than the first threshold as the detection frame of the abnormal vehicle in which the abnormal parking event occurs.
  • The first threshold can be set as required. If the intersection ratio between a detection frame and the mask of the target road is greater than the first threshold, the detection frame is at least mostly located within the mask of the target road, which means the vehicle is located on the target road; the detection frame of that vehicle can then be determined as the detection frame of an abnormal vehicle in which an abnormal parking event occurs.
  • In this way, the mask of the target road and the detection frames in the background image are used together to determine the detection frames of abnormal vehicles, which avoids identifying vehicles in non-target-road areas as abnormal vehicles.
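The patent does not spell out exactly how the intersection ratio between a box and a mask is computed. One reading consistent with "the detection frame is at least mostly located within the mask" is the fraction of the box area covered by the mask, sketched here with illustrative names:

```python
import numpy as np

def box_mask_overlap(box, mask):
    """Fraction of the detection box covered by the road mask.
    box is (x1, y1, x2, y2) in pixel coordinates; mask is a 2-D
    boolean array indexed as mask[y, x]."""
    x1, y1, x2, y2 = box
    region = mask[y1:y2, x1:x2]
    if region.size == 0:
        return 0.0
    return float(region.sum()) / region.size
```

A box is kept as an abnormal-vehicle detection when this ratio exceeds the first threshold, so vehicles parked in an adjacent lot (outside the mask) score near zero and are discarded.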
  • FIG. 6 is a schematic flowchart of some steps of the method for detecting an abnormal vehicle provided by an embodiment of the present disclosure. As shown in FIG. 6, in some embodiments of the present disclosure, after the detection frame located on the target road in the background image is determined as the detection frame of the abnormal vehicle in which the abnormal parking event occurs, that is, after the aforementioned steps S101-S104 or steps S201-S206 are executed, steps S301-S302 may also be included.
  • Step S301 Determine at least one image sequence composed of background images from the background images of at least part of the video frames.
  • the background images in the same image sequence include the detection frame of the same abnormal vehicle, and the background images are sorted according to the shooting time of the corresponding video frames.
  • Fig. 7 is a flowchart of a method for determining an image sequence provided by some embodiments of the present disclosure. As shown in FIG. 7 , in some embodiments of the present disclosure, the method for determining an image sequence may include steps S3011-S3013.
  • Target detection frames refer to detection frames that represent abnormal vehicles in the background image.
  • The preset interval may be set as required, and is not specifically limited in this embodiment. If the shooting interval between two background images is less than the preset interval, the intersection-over-union ratio of the target detection frames in the two background images is calculated; if the shooting interval between two background images is greater than or equal to the preset interval, the intersection-over-union ratio of the target detection frames in the two background images is not calculated.
  • In step S3011, the larger the intersection-over-union ratio of the target detection frames in the two background images, the higher the positional coincidence of the two target detection frames; the smaller the ratio, the lower the positional coincidence.
  • Step S3012 In response to the intersection-over-union ratio between the first target detection frame in one background image and the second target detection frame in the other background image being greater than a second threshold, determine that the first target detection frame and the second target detection frame are detection frames of the same abnormal vehicle.
  • the second threshold is a value used to determine whether the target detection frames in the two background images are the detection frames of the same abnormal vehicle, and the second threshold can be set as required.
  • If the shooting interval between the two background images is less than the preset interval and the intersection-over-union ratio between the first target detection frame in one background image and the second target detection frame in the other background image is greater than the second threshold, the two target detection frames are the detection frames of the same abnormal vehicle; otherwise, it is determined that the two target detection frames are not detection frames of the same vehicle.
  • After it is determined that the two target detection frames in the two background images are detection frames of the same abnormal vehicle, the two background images can be put into the same image sequence, so that the image sequence corresponds to the abnormal parking event of the same abnormal vehicle.
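Steps S3011-S3012 can be sketched as follows. The IoU formula is the standard one; the preset interval and second threshold values, the timestamp units, and the function names are illustrative assumptions.

```python
def box_iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def same_vehicle(box_a, time_a, box_b, time_b,
                 preset_interval=30.0, second_threshold=0.5):
    """Two target detection boxes belong to the same abnormal vehicle
    when the shooting interval is below the preset interval and their
    IoU exceeds the second threshold."""
    if abs(time_b - time_a) >= preset_interval:
        return False
    return box_iou(box_a, box_b) > second_threshold
```

This works because an abnormally stopped vehicle keeps an almost identical box across nearby background images, so a high IoU within a short interval is strong evidence of the same vehicle.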
  • Fig. 8 is a schematic diagram of an image sequence provided by an embodiment of the present disclosure.
  • Each frame in FIG. 8 represents a background image in the image sequence, and each frame contains a detection frame of the same abnormal vehicle. By arranging the multiple background images in time order and connecting the detection frames across the background images, a space-time tube of the abnormal parking event is formed; through the space-time tube, the process of the abnormal vehicle from the start of the abnormal parking event to its end can be visualized.
  • the schematic diagram shown in FIG. 8 is only an example.
  • The image sequence may not be displayed in the manner shown in FIG. 8, but may instead be represented in the form of an array; for example, the array may include the vertex coordinates of the detection frame of the same abnormal vehicle in each background image.
  • After the image sequence is determined, step S302 can be executed.
  • Step S302 For each image sequence, determine the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence.
  • Each image sequence corresponds to an abnormal parking event of one abnormal vehicle. By taking the time corresponding to the first image frame of each image sequence as the start time of the image sequence, and the time corresponding to the last image frame as the end time of the image sequence, the start time and end time of the abnormal parking event of the abnormal vehicle can be determined.
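Assuming an image sequence is stored as a time-sorted list of `(timestamp, detection_box)` entries (one representation consistent with the array form mentioned above; the layout is an assumption), step S302 reduces to reading the first and last timestamps:

```python
def event_times(image_sequence):
    """Given a time-sorted image sequence of (timestamp, detection_box)
    entries, the abnormal parking event starts at the first entry's
    timestamp and ends at the last entry's timestamp."""
    return image_sequence[0][0], image_sequence[-1][0]
```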
  • At least one image sequence determined in step S301 may include a first image sequence and a second image sequence.
  • the first image sequence corresponds to the abnormal parking event of the first abnormal vehicle
  • the second image sequence corresponds to the abnormal parking event of the second abnormal vehicle.
  • Fig. 9 is a schematic flowchart of a part of the abnormal vehicle detection method provided by some embodiments of the present disclosure. As shown in Fig. 9, steps S303-S304 may also be included before the aforementioned step S302.
  • Step S303: In response to the intersection-over-union ratio of the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence being greater than a third threshold, determine that the first abnormal parking event and the second abnormal parking event are the same abnormal parking event.
  • The third threshold is a threshold for judging whether the first abnormal vehicle and the second abnormal vehicle have had a collision, a scrape, or another accident.
  • If the intersection-over-union ratio of the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence is greater than the third threshold, it indicates that the first abnormal vehicle and the second abnormal vehicle have had a collision or scrape accident and that the two are parked abnormally due to the same accident; the first abnormal parking event and the second abnormal parking event can therefore be determined to be the same abnormal parking event.
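The intersection-over-union comparison used throughout these steps can be sketched as follows (a minimal illustration for axis-aligned boxes; the threshold value is an assumption, since the disclosure does not fix it):

```python
# Minimal intersection-over-union (IoU) sketch for two axis-aligned
# detection frames given as (x1, y1, x2, y2) vertex coordinates.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

THIRD_THRESHOLD = 0.3  # illustrative value, not specified by the disclosure

box1 = (0, 0, 10, 10)   # first abnormal vehicle, first frame of sequence 1
box2 = (5, 0, 15, 10)   # second abnormal vehicle, first frame of sequence 2
same_event = iou(box1, box2) > THIRD_THRESHOLD
print(iou(box1, box2))  # 1/3, so the two events are treated as one
```

A high IoU between the two first-frame boxes means the two vehicles overlap spatially, which the method interprets as evidence of a collision or scrape.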
  • Step S304: Merge the first image sequence and the second image sequence into one image sequence.
  • the first image sequence and the second image sequence may be combined into one image sequence.
  • the merged image sequence corresponds to the same aforementioned abnormal parking event.
  • Merging the first image sequence and the second image sequence into the same image sequence may consist of computing the union of the first image sequence and the second image sequence, and sorting the background images according to the shooting order of each background image to obtain the merged image sequence.
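The union-and-sort merging just described can be sketched as follows (the (shooting_time, frame_id) representation of a background image is an assumption for illustration):

```python
# Sketch of merging two image sequences: take the union of the two
# sequences and sort by each background image's shooting time.

def merge_sequences(seq_a, seq_b):
    """Union of two image sequences, ordered by shooting time."""
    merged = set(seq_a) | set(seq_b)           # union removes shared frames
    return sorted(merged, key=lambda f: f[0])  # restore shooting order

seq1 = [(1.0, "bg1"), (2.0, "bg2"), (3.0, "bg3")]
seq2 = [(2.0, "bg2"), (4.0, "bg4")]
print(merge_sequences(seq1, seq2))
# [(1.0, 'bg1'), (2.0, 'bg2'), (3.0, 'bg3'), (4.0, 'bg4')]
```

Taking the union rather than simple concatenation avoids duplicating background images shared by both sequences.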
  • In this way, abnormal parking events caused by the same traffic accident can be treated as the same abnormal parking event, which facilitates subsequent analysis of the cause of the abnormal parking event and of how the abnormal parking event unfolded.
  • The vehicle detection model may recognize stationary objects such as manhole covers and street signs as vehicles and construct image sequences based on the detection frames of such stationary objects, but such image sequences do not truly characterize abnormal parking events.
  • For this reason, the embodiments of the present disclosure also provide a method for eliminating such false detections.
  • FIG. 10 is a partial flowchart of a method for detecting an abnormal vehicle provided in some embodiments of the present disclosure. As shown in FIG. 10 , in order to eliminate false detection, in some embodiments of the present disclosure, after the aforementioned step S301 is performed, step S305-step S306 may also be included.
  • Step S305: For each image sequence, perform vehicle detection on other video frames located before the image sequence in the surveillance video.
  • In step S305, the vehicle detection model mentioned in the aforementioned step S103 may be used to perform vehicle detection on the other video frames located before the image sequence in the surveillance video, so as to obtain the vehicle detection frames in those video frames.
  • Step S306: In response to the other video frames including the detection frame of the same abnormal vehicle corresponding to the image sequence, delete the detection frame of the same abnormal vehicle from the target image sequence.
  • In step S306, it is necessary to judge whether the other video frames include the detection frame of the abnormal vehicle corresponding to the image sequence. If the intersection-over-union ratio of a vehicle detection frame in the other video frames and the detection frame of the abnormal vehicle corresponding to the image sequence is greater than a preset threshold, it can be determined that the other video frames include the detection frame of the abnormal vehicle corresponding to the image sequence.
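The false-detection check in steps S305-S306 can be sketched as follows (function names and the threshold are illustrative assumptions; the idea is that an object already present in earlier frames, such as a manhole cover, cannot be a newly parked vehicle):

```python
# Sketch of the false-detection check: if any vehicle detection frame in
# the video frames *before* the sequence overlaps the sequence's
# abnormal-vehicle frame beyond a preset threshold, the object was already
# there and the "abnormal vehicle" is a false detection.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def is_false_detection(sequence_box, earlier_frame_boxes, threshold=0.5):
    """True if the box already appears in frames before the sequence."""
    return any(iou(sequence_box, box) > threshold
               for boxes in earlier_frame_boxes for box in boxes)

manhole = (50, 50, 70, 70)                      # box from the image sequence
earlier = [[(51, 50, 70, 71)], [(200, 200, 240, 240)]]
print(is_false_detection(manhole, earlier))     # True: already present
```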
  • the aforementioned step S302 may specifically be step S3021.
  • Step S3021: In response to the other video frames not including the detection frame of the same abnormal vehicle corresponding to the image sequence, for each image sequence, determine the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence.
  • the start time and end time of the image sequence can be directly used as the start time and end time of the abnormal parking event of the abnormal vehicle.
  • By executing steps S305-S306, the abnormal vehicle detection method provided by the embodiments of the present disclosure can identify image sequences of non-vehicles formed by false detections, and thus obtain more accurate image sequences representing the abnormal parking events of abnormal vehicles.
  • the embodiment of the present disclosure also provides a method to calibrate the start time of the abnormal parking event.
  • Fig. 11 is a partial flow chart of the abnormal vehicle detection method provided by some embodiments of the present disclosure. As shown in FIG. 11, in some embodiments of the present disclosure, in order to avoid the start time of the image sequence corresponding to the abnormal parking event of an abnormal vehicle being earlier than the real time, steps S307-S310 may be included before the above step S302.
  • Step S307: For each image sequence, obtain multiple video frames corresponding to the image sequence based on the correspondence between background images and video frames.
  • each background image corresponds to an original video frame in the surveillance video, so according to the one-to-one correspondence between the background image and the video frame, multiple video frames corresponding to the background image in the image sequence can be found.
  • Step S308: Perform vehicle detection on the multiple video frames.
  • In step S308, the vehicle detection model mentioned above can be used to perform vehicle detection on the video frames corresponding to the background images in the image sequence, so as to identify the detection frames of vehicles in the corresponding video frames.
  • Step S309: Based on the detection results, determine the video frame in which the same abnormal vehicle appears for the first time among the multiple video frames as the target frame.
  • For example, if the same abnormal vehicle first appears in the 2nd of the multiple video frames, the 2nd video frame is determined as the target frame.
  • Step S310: Delete the part of the image sequence whose shooting time is earlier than the target frame.
  • Since the detection frame of the abnormal vehicle appears for the first time in the target frame, it can be determined that the abnormal parking event of the abnormal vehicle occurred beginning with the target frame, so the part of the image sequence whose shooting time is earlier than the target frame can be deleted to improve the accuracy of the start time of the abnormal parking event.
  • Following the example in step S309, if the second frame is the target frame, the first background image in the image sequence is deleted accordingly, and the image sequence after deletion includes 4 background images.
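Steps S309-S310 can be sketched as follows (detection results are given here as precomputed booleans; in the disclosure they would come from the vehicle detection model):

```python
# Sketch of trimming an image sequence: find the first video frame in which
# the abnormal vehicle is actually detected (the target frame), then drop
# the part of the sequence shot earlier than it.

def trim_before_target(sequence, detections):
    """Keep only frames from the first positive detection onward."""
    for i, detected in enumerate(detections):
        if detected:
            return sequence[i:]
    return []  # vehicle never detected: nothing to keep

sequence = ["bg1", "bg2", "bg3", "bg4", "bg5"]
detections = [False, True, True, True, True]  # first appears in 2nd frame
print(trim_before_target(sequence, detections))
# ['bg2', 'bg3', 'bg4', 'bg5'], i.e. 4 background images remain
```

This matches the example above: with the second frame as the target frame, the first background image is dropped and 4 remain.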
  • the abnormal vehicle detection method may further include step S311-step S312.
  • Step S311: Determine multiple abnormal parking events whose start times are within a preset time as the same abnormal parking event.
  • If an abnormal parking event occurs due to a traffic accident on the target road, problems such as road congestion caused by that abnormal parking event may, within a preset period of time, cause subsequent vehicles to also undergo abnormal parking events. Because such a subsequent abnormal parking event is related to the previous abnormal parking event, it can be identified as the same abnormal parking event as the previous one. Therefore, in order to facilitate the analysis of abnormal parking events, in this embodiment, multiple abnormal parking events whose start times are within a preset time may be determined as the same abnormal parking event.
  • Step S312: Determine the earliest start time and the latest end time corresponding to the multiple abnormal parking events as the start time and end time of the same abnormal parking event.
  • That is, the earliest start time corresponding to the multiple abnormal parking events can be used as the start time of the same abnormal parking event, and the latest end time corresponding to the multiple abnormal parking events as the end time of the same abnormal parking event.
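Steps S311-S312 can be sketched as follows (the window length and the (start, end) event representation are illustrative assumptions):

```python
# Sketch of merging abnormal parking events: events whose start times fall
# within a preset window of each other are treated as one event, whose span
# runs from the earliest start to the latest end.

def merge_events(events, window=60.0):
    """events: list of (start, end); merge those starting within `window`."""
    merged = []
    for start, end in sorted(events):
        if merged and start - merged[-1][0] <= window:
            prev_start, prev_end = merged[-1]
            merged[-1] = (prev_start, max(prev_end, end))
        else:
            merged.append((start, end))
    return merged

events = [(0.0, 120.0), (30.0, 200.0), (500.0, 520.0)]
print(merge_events(events))  # [(0.0, 200.0), (500.0, 520.0)]
```

The first two events start within 60 seconds of each other and collapse into one event spanning the earliest start (0.0) and the latest end (200.0); the third event is unrelated and kept separate.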
  • FIG. 12 is a schematic structural diagram of an abnormal vehicle detection device provided by an embodiment of the present disclosure.
  • the detection device can be understood as the above-mentioned computer device or some functional modules in the computer device.
  • the detection device 1200 includes: a video acquisition unit 1201 , a background image extraction unit 1202 , a first vehicle detection unit 1203 and a detection frame determination unit 1204 .
  • the video acquisition unit 1201 is used to obtain the monitoring video of the target road;
  • the background image extraction unit 1202 is used to perform background modeling processing based on the monitoring video, and obtains the background image of at least some video frames in the monitoring video;
  • the first vehicle detection unit 1203 is used to perform vehicle detection processing on the background image;
  • the detection frame determination unit 1204 is used to determine the detection frame on the target road in the background image as the detection frame of an abnormal vehicle that has an abnormal parking event.
  • In some embodiments, a mask extraction unit is also included. The mask extraction unit is used to perform differential mask extraction processing on the surveillance video to obtain the mask of the target road. Correspondingly, the detection frame determination unit includes a first intersection-over-union ratio calculation subunit and a detection frame determination subunit: the first intersection-over-union ratio calculation subunit is used to determine the intersection-over-union ratio of a detection frame in the background image and the mask; the detection frame determination subunit is used to determine a detection frame whose intersection-over-union ratio is greater than a first threshold as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred.
  • the abnormal vehicle detection device further includes an image sequence construction unit and an abnormal parking event determination unit.
  • The image sequence construction unit is used to determine, from the background images of at least part of the video frames, at least one image sequence composed of background images, wherein the background images in the same image sequence include the detection frame of the same abnormal vehicle. The abnormal parking event determination unit is used to determine, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, wherein the abnormal parking event corresponding to the image sequence refers to the abnormal parking event of the same abnormal vehicle corresponding to the image sequence.
  • the image sequence construction unit may include a second intersection and union ratio calculation subunit, a detection frame association subunit, and an image sequence combination subunit;
  • The second intersection-over-union ratio calculation subunit is used to calculate, for any two background images whose shooting interval is less than the preset interval, the intersection-over-union ratio of the target detection frame in one of the background images and the target detection frame in the other background image, where the target detection frame refers to the detection frame of an abnormal vehicle;
  • the detection frame association subunit is used to determine, in response to the intersection-over-union ratio of the first target detection frame in the one background image and the second target detection frame in the other background image being greater than a second threshold, that the first target detection frame and the second target detection frame are detection frames of the same abnormal vehicle;
  • the image sequence combination subunit is used to join one of the background images and another background image into the same image sequence.
  • In some embodiments, the at least one image sequence includes a first image sequence and a second image sequence, the first image sequence corresponding to the abnormal parking event of a first abnormal vehicle and the second image sequence corresponding to the abnormal parking event of a second abnormal vehicle; in this case, the abnormal vehicle detection device also includes an abnormal parking event fusion unit and an image sequence merging unit.
  • The abnormal parking event fusion unit is configured to determine, in response to the intersection-over-union ratio of the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence being greater than the third threshold, that the first abnormal parking event and the second abnormal parking event are the same abnormal parking event;
  • the image sequence merging unit is configured to merge the first image sequence and the second image sequence into one image sequence.
  • In some embodiments, the device further includes a second vehicle detection unit, and the second vehicle detection unit is configured to perform, for each image sequence, vehicle detection on other video frames located before the image sequence in the surveillance video; correspondingly, the abnormal vehicle detection device further includes a detection frame deletion unit and an abnormal parking event determination unit.
  • The detection frame deletion unit is used to delete the detection frame of the same abnormal vehicle from the target image sequence in response to the other video frames including the detection frame of the same abnormal vehicle corresponding to the image sequence; the abnormal parking event determination unit is used to determine, when the other video frames do not include the detection frame of the same abnormal vehicle corresponding to the image sequence, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence.
  • In some embodiments, the abnormal vehicle detection device further includes a video frame acquisition unit, a third vehicle detection unit, a target frame determination unit, and a deletion unit: the video frame acquisition unit is used to obtain, for each image sequence, multiple video frames corresponding to the image sequence based on the correspondence between background images and video frames; the third vehicle detection unit is used to perform vehicle detection on the multiple video frames; the target frame determination unit is used to determine, among the multiple video frames, the video frame in which the same abnormal vehicle appears for the first time as the target frame; the deletion unit is used to delete the part of the image sequence whose shooting time is earlier than the target frame.
  • In some embodiments, the abnormal vehicle detection device further includes an abnormal parking event merging unit, configured to determine multiple abnormal parking events whose start times are within a preset time as the same abnormal parking event, and to determine the earliest start time and the latest end time corresponding to the multiple abnormal parking events as the start time and end time of the same abnormal parking event.
  • the device provided in this embodiment can execute the method of any one of the embodiments in FIGS. 1-11 above, and its execution mode and beneficial effects are similar, and details are not repeated here.
  • An embodiment of the present disclosure also provides a computer device. The computer device includes a processor and a memory, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the method of any of the above-mentioned embodiments in FIGS. 1-11 can be realized.
  • FIG. 13 is a schematic structural diagram of a computer device in an embodiment of the present disclosure. Referring specifically to FIG. 13, it shows a schematic structural diagram of a computer device 1300 suitable for implementing an embodiment of the present disclosure.
  • The computer device 1300 may include a processing device 1301 (such as a central processing unit, a graphics processing unit, etc.), which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 1302 or a program loaded from a storage device 1308 into a random access memory (RAM) 1303. Various programs and data necessary for the operation of the computer device 1300 are also stored in the RAM 1303.
  • the processing device 1301, ROM 1302, and RAM 1303 are connected to each other through a bus 1304.
  • An input/output (I/O) interface 1305 is also connected to the bus 1304 .
  • The following devices can be connected to the I/O interface 1305: input devices 1306 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 1307 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 1308 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 1309.
  • the communication means 1309 may allow the computer device 1300 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 13 shows computer device 1300 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 1309, or from storage means 1308, or from ROM 1302.
  • When the computer program is executed by the processing device 1301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can transmit, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • The client and the computer device can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any network currently known or developed in the future.
  • the above-mentioned computer-readable medium may be included in the above-mentioned computer device, or may exist independently without being incorporated into the computer device.
  • The above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the computer device, the computer device is caused to: obtain the surveillance video of the target road; perform background modeling processing based on the surveillance video to obtain background images of at least part of the video frames in the surveillance video; perform vehicle detection processing on the background images; and determine a detection frame located on the target road in a background image as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or computing device.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, connected through the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of a unit does not constitute a limitation of the unit itself under certain circumstances.
  • For example, without limitation, exemplary types of hardware logic components that can be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • More specific examples of a machine-readable storage medium would include one or more wire-based electrical connections, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • An embodiment of the present disclosure also provides a computer-readable storage medium in which a computer program is stored. When the computer program is executed by a processor, the method in any of the above-mentioned embodiments in FIGS. 1-11 can be implemented. Its execution mode and beneficial effects are similar and will not be repeated here.

Abstract

A method, apparatus, device, and storage medium for detecting an abnormal vehicle, wherein the method includes: acquiring a surveillance video of a target road (S101); performing background modeling processing based on the surveillance video to obtain background images of at least some video frames in the surveillance video (S102); performing vehicle detection processing on the background images (S103); and determining a detection frame located on the target road in a background image as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred (S104).

Description

Abnormal Vehicle Detection Method, Apparatus, Device, and Storage Medium
Cross-Reference to Related Applications
This application is based on, and claims priority to, CN application No. 202110667910.5 filed on June 16, 2021, the disclosure of which is incorporated into this application in its entirety.
Technical Field
Embodiments of the present disclosure relate to the technical field of image processing, and in particular to a method, apparatus, device, and storage medium for detecting an abnormal vehicle.
Background
Related technologies usually detect vehicles that have stopped abnormally on a road using deep-learning-based methods. However, because samples of abnormally stopped vehicles are relatively scarce and sample annotation is not fine-grained enough, the related technologies can generally only train a detection model with samples of normally traveling vehicles, and identify significant deviations from the normal traveling state as abnormal stopping behavior.
Summary
In one aspect, the present disclosure provides a method for detecting an abnormal vehicle, including:
acquiring a surveillance video of a target road;
performing background modeling processing based on the surveillance video to obtain background images of at least some video frames in the surveillance video;
performing vehicle detection processing on the background images;
determining a detection frame located on the target road in a background image as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred.
Optionally, the method further includes:
after acquiring the surveillance video of the target road, performing differential mask extraction processing on the surveillance video to obtain a mask of the target road;
wherein determining the detection frame located on the target road in the background image as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred includes:
determining the intersection-over-union ratio of a detection frame in the background image and the mask;
determining a detection frame whose intersection-over-union ratio is greater than a first threshold as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred.
Optionally, the method further includes:
after determining the detection frame located on the target road in the background image as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred, determining, from the background images of the at least some video frames, at least one image sequence composed of background images, wherein the background images in the same image sequence include the detection frame of the same abnormal vehicle;
for each image sequence, determining the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, wherein the abnormal parking event corresponding to the image sequence refers to the abnormal parking event of the same abnormal vehicle corresponding to the image sequence.
Optionally, determining, from the background images of the at least some video frames, at least one image sequence composed of background images includes:
for any two background images whose shooting interval is less than a preset interval, calculating the intersection-over-union ratio of a target detection frame in one of the background images and a target detection frame in the other background image, the target detection frame referring to the detection frame of an abnormal vehicle;
in response to the intersection-over-union ratio of a first target detection frame in the one background image and a second target detection frame in the other background image being greater than a second threshold, determining that the first target detection frame and the second target detection frame are detection frames of the same abnormal vehicle;
adding the one background image and the other background image to the same image sequence.
Optionally, the at least one image sequence includes a first image sequence and a second image sequence, the first image sequence corresponding to an abnormal parking event of a first abnormal vehicle and the second image sequence corresponding to an abnormal parking event of a second abnormal vehicle;
the method further includes:
before determining, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, in response to the intersection-over-union ratio of the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence being greater than a third threshold, determining that the first abnormal parking event and the second abnormal parking event are the same abnormal parking event;
merging the first image sequence and the second image sequence into one image sequence.
Optionally, the method further includes:
after determining, from the background images of the at least some video frames, the at least one image sequence composed of background images, performing, for each image sequence, vehicle detection on other video frames located before the image sequence in the surveillance video;
in response to the other video frames including the detection frame of the same abnormal vehicle corresponding to the image sequence, deleting the detection frame of the same abnormal vehicle from the target image sequence;
wherein, in response to the other video frames not including the detection frame of the same abnormal vehicle corresponding to the image sequence, performing the step of determining, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence.
Optionally, the method further includes:
before determining, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, acquiring, for each image sequence, multiple video frames corresponding to the image sequence based on the correspondence between background images and video frames;
performing vehicle detection on the multiple video frames;
based on the detection results, determining the video frame in which the same abnormal vehicle appears for the first time among the multiple video frames as a target frame;
deleting the part of the image sequence whose shooting time is earlier than the target frame.
Optionally, the method further includes:
after determining, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, determining multiple abnormal parking events whose start times are within a preset time as the same abnormal parking event;
determining the earliest start time and the latest end time corresponding to the multiple abnormal parking events as the start time and end time of the same abnormal parking event.
In another aspect, the present disclosure provides an apparatus for detecting an abnormal vehicle, including:
a video acquisition unit, configured to acquire a surveillance video of a target road;
a background image extraction unit, configured to perform background modeling processing based on the surveillance video to obtain background images of at least some video frames in the surveillance video;
a first vehicle detection unit, configured to perform vehicle detection processing on the background images;
a detection frame determination unit, configured to determine a detection frame located on the target road in a background image as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred.
Optionally, the apparatus further includes:
a mask extraction unit, configured to perform differential mask extraction processing on the surveillance video to obtain a mask of the target road;
the detection frame determination unit includes:
a first intersection-over-union ratio calculation subunit, configured to determine the intersection-over-union ratio of a detection frame in the background image and the mask;
a detection frame determination subunit, configured to determine a detection frame whose intersection-over-union ratio is greater than a first threshold as the detection frame of an abnormal vehicle in which an abnormal parking event has occurred.
Optionally, the apparatus further includes:
an image sequence construction unit, configured to determine, from the background images of the at least some video frames, at least one image sequence composed of background images, wherein the background images in the same image sequence include the detection frame of the same abnormal vehicle;
an abnormal parking event determination unit, configured to determine, for each image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence, wherein the abnormal parking event corresponding to the image sequence refers to the abnormal parking event of the same abnormal vehicle corresponding to the image sequence.
Optionally, the image sequence construction unit includes:
a second intersection-over-union ratio calculation subunit, configured to calculate, for any two background images whose shooting interval is less than a preset interval, the intersection-over-union ratio of a target detection frame in one of the background images and a target detection frame in the other background image, the target detection frame referring to the detection frame of an abnormal vehicle;
a detection frame association subunit, configured to determine, in response to the intersection-over-union ratio of a first target detection frame in the one background image and a second target detection frame in the other background image being greater than a second threshold, that the first target detection frame and the second target detection frame are detection frames of the same abnormal vehicle;
an image sequence combination subunit, configured to add the one background image and the other background image to the same image sequence.
Optionally, the at least one image sequence includes a first image sequence and a second image sequence, the first image sequence corresponding to an abnormal parking event of a first abnormal vehicle and the second image sequence corresponding to an abnormal parking event of a second abnormal vehicle; and the apparatus further includes:
an abnormal parking event fusion unit, configured to determine, in response to the intersection-over-union ratio of the detection frame of the first abnormal vehicle in the first frame of the first image sequence and the detection frame of the second abnormal vehicle in the first frame of the second image sequence being greater than a third threshold, that the first abnormal parking event and the second abnormal parking event are the same abnormal parking event;
an image sequence merging unit, configured to merge the first image sequence and the second image sequence into one image sequence.
Optionally, the apparatus further includes: a second vehicle detection unit, configured to perform, for each image sequence, vehicle detection on other video frames located before the image sequence in the surveillance video;
a detection frame deletion unit, configured to delete the detection frame of the same abnormal vehicle from the target image sequence in response to the other video frames including the detection frame of the same abnormal vehicle corresponding to the image sequence;
the abnormal parking event determination unit, configured to determine, for each image sequence, when the other video frames do not include the detection frame of the same abnormal vehicle corresponding to the image sequence, the start time and end time of the image sequence as the start time and end time of the abnormal parking event corresponding to the image sequence.
Optionally, the apparatus further includes:
a video frame acquisition unit, configured to acquire, for each image sequence, multiple video frames corresponding to the image sequence based on the correspondence between background images and video frames;
a third vehicle detection unit, configured to perform vehicle detection on the multiple video frames;
a target frame determination unit, configured to determine, based on the detection results, the video frame in which the same abnormal vehicle appears for the first time among the multiple video frames as a target frame;
a deletion unit, configured to delete the part of the image sequence whose shooting time is earlier than the target frame.
Optionally, the apparatus further includes: an abnormal parking event merging unit, configured to determine multiple abnormal parking events whose start times are within a preset time as the same abnormal parking event, and to determine the earliest start time and the latest end time corresponding to the multiple abnormal parking events as the start time and end time of the same abnormal parking event.
In yet another aspect, the present disclosure provides a computer device, including:
a memory; and a processor coupled to the memory, the processor being configured to execute the detection method according to any one of the preceding items based on instructions stored in the memory.
In yet another aspect, the present disclosure provides a non-transitory computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor, the detection method according to any one of the preceding items is implemented.
In yet another aspect, the present disclosure provides a computer program, including instructions that, when executed by a processor, cause the processor to execute the detection method according to any one of the preceding items.
In yet another aspect, the present disclosure provides a non-transitory computer program product, including instructions that, when executed by a processor, cause the processor to execute the detection method according to any one of the preceding items.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
In order to explain the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of an abnormal vehicle detection system provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of a method for detecting an abnormal vehicle provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of a method for detecting an abnormal vehicle provided by another embodiment of the present disclosure;
FIG. 4 is a video frame image of a surveillance video provided by an embodiment of the present disclosure;
FIG. 5 is a mask of the target road obtained for the target road in FIG. 4;
FIG. 6 is a schematic flowchart of some steps of the method for detecting an abnormal vehicle provided by an embodiment of the present disclosure;
FIG. 7 is a flowchart of a method for determining an image sequence provided by some embodiments of the present disclosure;
FIG. 8 is a schematic diagram of an image sequence provided by an embodiment of the present disclosure;
FIG. 9 is a partial schematic flowchart of the abnormal vehicle detection method provided by some embodiments of the present disclosure;
FIG. 10 is a partial schematic flowchart of the method for detecting an abnormal vehicle provided by some embodiments of the present disclosure;
FIG. 11 is a partial flowchart of the method for detecting an abnormal vehicle provided by some embodiments of the present disclosure;
FIG. 12 is a schematic structural diagram of an apparatus for detecting an abnormal vehicle provided by an embodiment of the present disclosure;
FIG. 13 is a schematic structural diagram of a server in an embodiment of the present disclosure.
Detailed Description
In order to understand the above objects, features, and advantages of the present disclosure more clearly, the solutions of the present disclosure will be further described below. It should be noted that, in the case of no conflict, the embodiments of the present disclosure and the features in the embodiments can be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present disclosure, but the present disclosure can also be implemented in other ways different from those described here. Obviously, the embodiments in the specification are only some, rather than all, of the embodiments of the present disclosure.
The methods adopted by the related technologies easily identify some normal driving behaviors as abnormal stopping behaviors, and deep-learning methods are only applicable to scenarios homogeneous with the training samples, with poor accuracy in unknown scenarios. In order to solve the above technical problems, or at least partially solve them, embodiments of the present disclosure provide a method, apparatus, device, and storage medium for detecting an abnormal vehicle.
FIG. 1 is a schematic structural diagram of an abnormal vehicle detection system provided by an embodiment of the present disclosure. As shown in FIG. 1, the abnormal vehicle detection system 100 may include at least one image acquisition apparatus 101, and a computer device 102 communicatively connected to the image acquisition apparatus 101.
The image acquisition apparatus 101 may be installed on an equipment tower beside the target road or on a high-rise building, and is used to photograph the target road to obtain a surveillance video of the target road. The road monitored by the image acquisition apparatus 101 is called the target road; the target road may be, for example, a trunk road, a vehicle intersection, or a road section with a high incidence of traffic accidents, but is not limited to the roads listed here.
The computer device 102 is used to store and process the surveillance video of the target road and to determine abnormal vehicles in the surveillance video in which an abnormal parking event has occurred, where an abnormal vehicle refers to a vehicle whose parking time on the road exceeds a preset threshold, such as a vehicle involved in a traffic accident. Exemplarily, the computer device 102 in the implementation of the present disclosure can be understood as a device with storage and image processing functions, such as a server or a server cluster.
FIG. 2 is a flowchart of a method for detecting an abnormal vehicle provided by an embodiment of the present disclosure; the method can be exemplarily executed by the computer device in FIG. 1. As shown in FIG. 2, in one embodiment, the method for detecting an abnormal vehicle may include steps S101 to S104.
Step S101: Acquire a surveillance video of a target road.
The surveillance video of the target road can be understood as a video of the target road collected by the image acquisition apparatus within a certain period of time. The surveillance video of the target road includes stationary objects and moving objects. The stationary objects may include green belts, guardrails, lane lines, and various road signs, as well as vehicles that have stopped on the road for longer than a preset threshold. The moving objects may include normally traveling vehicles, pedestrians, and the like on the target road.
In the embodiments of the present disclosure, the computer device obtains the surveillance video of the target road by loading it from a local memory, or by downloading it from a device such as the image acquisition apparatus. It should be noted that the manner of obtaining the surveillance video in the embodiments of the present disclosure is not limited to the aforementioned loading from a local memory or downloading from other devices; other manners known in the art are also possible.
Step S102: Perform background modeling processing based on the surveillance video to obtain background images of at least some video frames in the surveillance video.
In each video frame of the surveillance video, the pixels representing the aforementioned stationary objects constitute the background image of the video frame, and the pixels representing the aforementioned moving objects constitute the foreground image of the video frame.
Performing background modeling processing based on the surveillance video to obtain background images of at least some video frames means comparing the pixels of adjacent video frames in the surveillance video and determining pixels that remain unchanged between adjacent video frames as pixels of the background image, thereby obtaining the background image. The background modeling method in this embodiment may be a forward background modeling method or a backward background modeling method; background modeling methods are similar to the related art and will not be described in detail here.
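The "unchanged pixels become background" idea can be sketched as follows (a deliberately simplified illustration on grayscale frames given as 2D lists; real background modeling, e.g. mixture-of-Gaussians, is considerably more involved, and the tolerance value is an assumption):

```python
# Simplified background-modeling sketch: a pixel whose value does not
# change (within a tolerance) between adjacent frames is treated as
# background; a pixel that changes belongs to a moving object.

def background_mask(prev_frame, curr_frame, tol=5):
    """1 where the pixel is static (background), 0 where it moved."""
    return [[1 if abs(p - c) <= tol else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

prev = [[10, 10, 200],
        [10, 10, 200]]
curr = [[12, 10,  40],   # right column changed: a moving object
        [10, 11,  35]]
print(background_mask(prev, curr))
# [[1, 1, 0], [1, 1, 0]]
```

A stopped vehicle changes no pixels between adjacent frames, so it is kept in the background image, which is exactly why the later steps can find abnormally parked vehicles there.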
应当注意的是,本公开实施例采用的背景建模算法并不限于前文提及的前向的或者后向的背景建模算法,也可以采用图像处理领域中已知的各种背景建模算法。
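作为理解上述背景建模思路的一个示意，下面给出一段极简的Python代码（其中以二维灰度列表模拟视频帧，tol为假设的像素容差参数，并非本公开限定的实现方式）：

```python
def estimate_background(frames, tol=0):
    """前向背景建模的极简示意：逐对比较相邻帧，
    像素值未发生变化（差异不超过 tol）的位置被更新为背景像素。"""
    h, w = len(frames[0]), len(frames[0][0])
    # 以第一帧初始化背景图像
    background = [[frames[0][i][j] for j in range(w)] for i in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                if abs(cur[i][j] - prev[i][j]) <= tol:
                    background[i][j] = cur[i][j]  # 静止像素归入背景
    return background
```

该示意中，停止不动的对象的像素在相邻帧间保持不变，因此会被保留在背景图像中，这与"车辆停止在道路上，在背景建模时被作为背景保留在背景图像中"的说明一致。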
步骤S103:对背景图像进行车辆检测处理。
本公开实施例中,是采用预设的车辆检测模型对背景图像进行车辆检测处理,而得到背景图像中的车辆。背景图像中的车辆即为发生异常停车事件的车辆,由于车辆停止在道路上,因此,在背景建模时,车辆被作为背景保留在背景图像中。
车辆检测模型是采用车辆样本图像训练得到的，用于对图像中的车辆进行识别的模型。在本公开实施例的一个应用中，采用的车辆检测模型可以示例性的理解为Cascade R-CNN模型，但不局限于Cascade R-CNN模型。Cascade R-CNN模型是一个多级目标检测模型，可以避免车辆检测模型训练过程中的过拟合和推理过程中的质量失配问题。
应当注意的是，本公开实施例采用的车辆检测模型并不限于Cascade R-CNN模型，在本公开实施例的其他应用中，车辆检测模型也可以是本领域已知的其他模型。
如果采用车辆检测模型检测到背景图像中具有车辆,则其在背景图像上形成一个能够包含车辆的最小尺寸的检测框。在本公开实施例具体应用中,检测框可以是一矩形框。
在本公开一些实施例中，车辆检测模型在识别到车辆的检测框之后，还可以输出该检测框对应为车辆的置信度；实际应用中，可以将置信度小于预设阈值的检测框删除，仅保留置信度大于预设阈值的检测框。
步骤S104:将背景图像中位于目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框。
由于背景图像中的对象均为静止对象，如果在背景图像中识别出了车辆，则可以确定此车辆为静止车辆。而位于目标道路上的静止车辆可以理解为发生异常停车事件的异常车辆，因此可以将从背景图像中识别出的并且位于目标道路上的车辆的检测框，确定为发生异常停车事件的异常车辆的检测框。
本公开实施例提供的异常车辆的检测方法，通过采用背景建模的方法对获取到的目标道路的监控视频进行背景建模，得到监控视频中的至少部分视频帧的背景图像，再采用车辆检测模型对背景图像进行车辆检测，将背景图像中位于目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框，能够提高异常车辆检测的准确性。
相比于将监控视频输入预先训练好的异常车辆识别模型,直接采用异常车辆识别模型识别视频中的异常车辆的方法,本公开实施例提供的异常车辆检测方法克服了异常车辆样本数量少,异常车辆识别模型对未知场景中异常车辆识别能力较差的问题。
图3是本公开另一实施例提供的异常车辆的检测方法的流程图。如图3所示,在本公开另一些实施例中,异常车辆的检测方法包括步骤S201-步骤S206。其中步骤S201-步骤S203的执行方式和有益效果与前述的步骤S101-S103类似,此处仅就步骤S204-步骤S206做具体的描述,其他步骤可以参照前文叙述。
步骤S201:获取目标道路的监控视频。
步骤S202:基于监控视频进行背景建模处理,得到监控视频中至少部分视频帧的背景图像。
步骤S203:对背景图像进行车辆检测处理。
步骤S204:对监控视频进行差分掩膜提取处理,得到目标道路的掩膜。
掩膜（又称为蒙版）是用于框选图像特定区域的选区。目标道路的掩膜是用于框选监控视频各个视频帧中目标道路的掩膜。
在本公开实施例的一些实际应用中,监控视频的各个视频帧除了包括目标道路区域外,还可以包括与目标道路相邻的建筑物区域。相邻的建筑物区域可能有停车场,而停车场中的车辆不是本实施例所称的异常车辆。
本公开实施例中，通过设置目标道路的掩膜，利用目标道路的掩膜排除目标道路外的车辆检测框，继而避免将目标道路以外的车辆识别为异常车辆。
在本公开一个实施例中，对监控视频进行差分掩膜提取处理，可以包括：选取监控视频中的两个视频帧，比较两个视频帧相同位置处的像素，如果相同位置处的像素发生变化，则将此位置确定为目标道路上的位置，从而通过不断分析比对即可得到目标道路的掩膜。
示例性的，在本公开的一个应用中，针对监控视频，可以每隔5帧选取一个视频帧作为确定掩膜的视频帧；比较提取出的相邻的两个视频帧的相同位置处的像素，如果相同位置处的像素差异超过预设阈值，则确定该位置为目标道路上的位置，从而通过不断分析比对即可得到目标道路的掩膜。
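上述差分掩膜提取过程可以用如下Python片段示意（step、thresh为假设的采样间隔与像素差异阈值参数，帧以二维灰度列表模拟）：

```python
def extract_road_mask(frames, step=5, thresh=10):
    """差分掩膜提取的示意：每隔 step 帧取一帧，
    相邻采样帧在相同位置的像素差异超过 thresh 的位置记为目标道路（值为 1）。"""
    sampled = frames[::step]
    h, w = len(frames[0]), len(frames[0][0])
    mask = [[0] * w for _ in range(h)]
    for prev, cur in zip(sampled, sampled[1:]):
        for i in range(h):
            for j in range(w):
                if abs(cur[i][j] - prev[i][j]) > thresh:
                    mask[i][j] = 1  # 像素发生变化，判定为道路上的位置
    return mask
```

有车辆、行人经过的道路区域像素会不断变化，因此被标入掩膜，而建筑物等静止区域保持为0。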
图4是本公开实施例提供的一种监控视频的视频帧图像;图5是针对图4中的目标道路得到的目标道路的掩膜。如图4所示,视频帧图像400中除了包括目标道路401外,还包括目标道路401旁侧的建筑物402、目标道路401旁侧的绿化带403。如图5所示,通过对监控视频中的视频帧进行比较,可以得到目标道路的掩膜404,该掩膜的区域即为目标道路在视频帧中的区域。
应当注意的是,本公开实施例提供的步骤S204可以在步骤S202-S203之前执行,也可以在步骤S202-S203之后执行,或者可以与步骤S202-S203并行执行。
S205:确定背景图像中的检测框与掩膜的交并比。
为了确定背景图像中检测框与掩膜的交并比，需要首先分别确定背景图像中检测框与掩膜的交集和并集，随后再采用交集和并集的比值作为背景图像中检测框和掩膜的交并比。
S206:将交并比大于第一阈值的检测框确定为发生异常停车事件的异常车辆的检测框。
第一阈值可以根据需要进行设定。如果检测框与目标道路的掩膜的交并比大于第一阈值，则表明检测框至少大部分位于目标道路的掩膜中，也就表明车辆位于目标道路中，可以将车辆的检测框确定为发生异常停车事件的异常车辆的检测框。
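步骤S205-S206中检测框与掩膜的交并比计算，可以用如下示意代码勾勒（以像素集合方式计算，仅为便于理解的一个假设实现，实际实现通常采用矩阵运算）：

```python
def box_mask_iou(box, mask):
    """检测框与目标道路掩膜的交并比示意。
    box 为 (x1, y1, x2, y2)，mask 为 0/1 二维列表；
    以框内像素集合与掩膜像素集合的交集、并集之比作为交并比。"""
    x1, y1, x2, y2 = box
    box_pixels = {(i, j) for i in range(y1, y2) for j in range(x1, x2)}
    mask_pixels = {(i, j) for i, row in enumerate(mask)
                   for j, v in enumerate(row) if v}
    union = box_pixels | mask_pixels
    if not union:
        return 0.0
    return len(box_pixels & mask_pixels) / len(union)
```

实际应用中，可将该交并比与第一阈值比较，大于第一阈值的检测框即被确定为异常车辆的检测框。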
采用本公开实施例提供的异常车辆检测方法，利用目标道路的掩膜和背景图像中的检测框确定异常车辆的检测框，可以避免将非目标道路区域的车辆识别为异常车辆。
图6是本公开实施例提供的异常车辆的检测方法的部分步骤流程示意图。如图6所示,在本公开一些实施例中,在将背景图像位于目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框后,即执行前述的步骤S101-S104或者步骤S201-S206后,还可以包括步骤S301-S302。
步骤S301:从至少部分视频帧的背景图像中,确定出至少一个由背景图像构成的图像序列。
位于同一图像序列中的背景图像包括同一异常车辆的检测框,并且背景图像按照对应的视频帧的拍摄时间进行排序。
图7是本公开一些实施例提供的一种确定图像序列的方法的流程图。如图7所示，本公开一些实施例中，图像序列的确定方法可以包括步骤S3011-S3013。
S3011:针对任意两个拍摄间隔小于预设间隔的背景图像,计算其中一个背景图像中的目标检测框与另一个背景图像中的目标检测框的交并比。
目标检测框是指表示背景图像中异常车辆的检测框。
预设间隔可以根据需要进行设定，本实施例不做具体限定。如果两个背景图像的拍摄间隔小于预设间隔，则可以对两个背景图像中的目标检测框进行交并比计算；如果两个背景图像的拍摄间隔大于或者等于预设间隔，则不对这两个背景图像中的目标检测框进行交并比计算。
在步骤S3011中，两个背景图像中的目标检测框的交并比越大，表明两个背景图像中的目标检测框的位置重合度越高；交并比越小，表明两个背景图像中的目标检测框的位置重合度越低。
S3012:响应于其中一个背景图像中的第一目标检测框与另一个背景图像中的第二目标检测框的交并比大于第二阈值,则确定第一目标检测框和第二目标检测框是同一异常车辆的检测框。
第二阈值是用于判定两个背景图像中目标检测框是否为同一异常车辆的检测框的数值,第二阈值可以根据需要进行设定。
如果两个背景图像的拍摄间隔小于预设间隔,并且一个背景图像中的第一目标检测框与另一背景图像中第二目标检测框的交并比大于第二阈值,则表征两个目标检测框为同一异常车辆的检测框,否则确定这两个目标检测框不是同一个车辆的检测框。
S3013:将所述其中一个背景图像和所述另一个背景图像加入同一图像序列。
在确定两个背景图像中某两个目标检测框为同一异常车辆的检测框后,则可以将两个背景图像放入到同一图像序列中,以使该图像序列对应于该同一异常车辆的异常停车事件。
图8是本公开实施例提供的一个图像序列的示意图。其中,图8中的每一个帧分别表示了图像序列中的一个背景图像,并且每一帧中均具有同一个异常车辆的检测框;通过将多个背景图像按照时间顺序排列,并将多个背景图像中的检测框进行连接,可以表示出一个异常车辆发生异常停车事件的时空管,通过时空管可以形象地示出异常车辆从发生异常停车事件到异常停车事件结束的过程。
应当注意的是,图8所示的示意图仅是一个示例,在本公开实施例提供的异常车辆的检测方法过程中,可能并不会采用图8所示的方法展示图像序列,而是采用数组的形式表示图像序列;示例性的,数组中可以包括同一异常车辆的检测框在各背景图像中的顶点坐标。
可以理解的是，如果监控视频中包括多个异常车辆，则基于步骤S3011-S3013，可以得到多个异常车辆对应的多个图像序列。随后可以执行步骤S302。
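上述步骤S3011-S3013的串联过程可以用如下Python代码示意（max_gap对应所称的预设间隔，iou_thresh对应第二阈值，均为假设取值；检测框以(x1, y1, x2, y2)元组表示）：

```python
def box_iou(a, b):
    """两个矩形框 (x1, y1, x2, y2) 的交并比。"""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def build_sequences(detections, max_gap=5, iou_thresh=0.5):
    """detections 为按拍摄时间排序的 (时间, 目标检测框) 列表；
    拍摄间隔小于 max_gap 且交并比大于 iou_thresh 的检测框
    被认定为同一异常车辆，对应背景图像放入同一图像序列。"""
    sequences = []
    for t, box in detections:
        for seq in sequences:
            last_t, last_box = seq[-1]
            if t - last_t < max_gap and box_iou(box, last_box) > iou_thresh:
                seq.append((t, box))
                break
        else:
            sequences.append([(t, box)])  # 未匹配到已有序列，新建一个序列
    return sequences
```

该示意会为监控视频中的每个异常车辆各得到一个图像序列，与上文"多个异常车辆对应多个图像序列"的说明一致。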
步骤S302:针对每个图像序列,将图像序列的起始时间和结束时间,确定为图像序列对应的异常停车事件的起始时间和结束时间。
因为每个图像序列均对应一个异常车辆的异常停车事件，所以将每个图像序列的起始图像帧对应的时间作为图像序列的起始时间，将每个图像序列的结束图像帧对应的时间作为图像序列的结束时间，即可确定异常车辆的异常停车事件的起始时间和结束时间。
在本公开一些实施例中,步骤S301确定的至少一个图像序列中可以包括第一图像序列和第二图像序列。其中,第一图像序列对应于第一异常车辆的异常停车事件,第二图像序列对应第二异常车辆的异常停车事件。
图9是本公开一些实施例提供的异常车辆检测方法的部分流程示意图,如图9所示,本公开一些实施例提供的异常车辆的检测方法除了包括前述的步骤S301和S302外,在步骤S302之前还可以包括步骤S303-S304。
S303:响应于第一异常车辆在第一图像序列的第一帧中的检测框与第二异常车辆在第二图像序列的第一帧中的检测框的交并比大于第三阈值,则确定第一异常停车事件和第二异常停车事件是同一异常停车事件。
第三阈值是用于判定第一异常车辆和第二异常车辆是否发生碰撞、剐蹭等事故的阈值。
如果第一异常车辆在第一图像序列中的第一帧的检测框和第二异常车辆在第二图像序列中的第一帧的检测框的交并比大于第三阈值,则表明第一异常车辆和第二异常车辆出现了碰撞、剐蹭事故,二者是因为同一事故而出现异常停车,因此可以将第一异常停车事件和第二异常停车事件确定为同一异常停车事件。
S304:将第一图像序列和第二图像序列合并成一个图像序列。
在第一异常停车事件和第二异常停车事件是同一异常停车事件时,可以将第一图像序列和第二图像序列合并为一个图像序列。合并后的图像序列对应于前述所称的同一异常停车事件。
具体的,将第一图像序列和第二图像序列合并为同一图像序列,可以是求取第一图像序列和第二图像序列的并集,并按照图像序列中各背景图像的拍摄顺序对并集中的背景图像进行排序,得到合并后的图像序列。
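求并集并按拍摄时间排序的合并操作，可以简单示意如下（序列元素以(拍摄时间, 检测框)二元组表示，仅为一种假设的数据组织方式）：

```python
def merge_sequences(seq_a, seq_b):
    """求第一图像序列和第二图像序列的并集，
    并按各背景图像的拍摄时间重新排序，得到合并后的图像序列。"""
    return sorted(set(seq_a) | set(seq_b))
```

合并后的图像序列即对应同一异常停车事件，重复的背景图像在求并集时被去重。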
通过将第一图像序列和第二图像序列合并为同一图像序列，可以将同一交通事故造成的各个异常停车事件作为同一异常停车事件处理，进而方便后续分析异常停车事件的原因和异常停车事件的发生过程。
实际应用中，车辆检测模型可能会将井盖、路牌等静止物体识别为车辆，并根据此类静止物体的检测框构建图像序列，而此类图像序列并不是真正用于表征异常停车事件的图像序列。针对这种情况，本公开实施例还提供了一种方法用于排除这部分误检。
示例的，图10是本公开一些实施例提供的异常车辆的检测方法的部分流程示意图。如图10所示，为消除误检，在本公开的一些实施例中，在执行前述的步骤S301后，还可以包括步骤S305-步骤S306。
步骤S305:针对每个图像序列,对监控视频中位于图像序列之前的其他视频帧进行车辆检测。
步骤S305执行时,可以采用前述步骤S103中提及的车辆检测模型,对监控视频中位于图像序列之前的其他视频帧进行车辆检测,以得到其他视频帧中的车辆的检测框。
步骤S306:响应于其他视频帧中包括图像序列对应的同一异常车辆的检测框,则从目标图像序列中删除同一异常车辆的检测框。
在步骤S306中,需要判断其他视频帧中是否包括图像序列所对应的异常车辆的检测框;如果所述其他视频帧中的车辆的检测框与图像序列所对应的异常车辆的检测框的交并比大于预设阈值,则可以判定所述其他视频帧中包括该图像序列所对应的异常车辆的检测框。
在其他视频帧包括图像序列所对应的异常车辆的检测框的情况下，说明该检测框对应的静止对象一直都存在，其并不是异常车辆的检测框，而可能是路牌等静止物体被误检为车辆，因此可以从图像序列中删除该检测框，以消除误检。
在异常车辆的检测方法包括步骤S305的情况下,前述的步骤S302具体可以为步骤S3021。
步骤S3021:响应于其他视频帧中不包括图像序列对应的同一异常车辆的检测框,则执行针对每个图像序列,将图像序列的起始时间和结束时间,确定为图像序列对应的异常停车事件的起始时间和结束时间的步骤。
也就是说,如果其他视频帧不包括图像序列所对应的异常车辆的检测框,则可以直接将图像序列的起始时间和结束时间,作为该异常车辆异常停车事件的起始时间和结束时间。
通过执行步骤S305-S306,本公开实施例提供的异常车辆的检测方法可以识别出误检形成的非异常车辆的图像序列,得到更为准确的表征异常车辆的异常停车事件的图像序列。
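上述误检排除逻辑可以用如下示意代码表达（iou_thresh对应判断两检测框是否高度重合的预设阈值，为假设参数）：

```python
def box_iou(a, b):
    """两个矩形框 (x1, y1, x2, y2) 的交并比。"""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def is_false_detection(seq_box, earlier_boxes, iou_thresh=0.5):
    """若图像序列之前的其他视频帧中已出现与该检测框高度重合的框，
    说明对应对象（如井盖、路牌）一直静止存在，判定为误检。"""
    return any(box_iou(seq_box, b) > iou_thresh for b in earlier_boxes)
```

判定为误检的检测框从图像序列中删除；反之则保留图像序列，继续执行步骤S302确定异常停车事件的起止时间。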
在本公开的一些实施例提供的异常车辆的检测方法中，如果采用后向的背景建模方法进行背景建模，由于建模算法的原因，可能导致异常车辆在监控视频帧对应的背景图像中出现的时间提前，从而使得异常车辆的异常停车事件对应的图像序列的起始时间提前。为了解决这一问题，本公开实施例还提供了一种方法来校准异常停车事件的起始时间。
图11是本公开一些实施例提供的异常车辆的检测方法的部分流程图。如图11所示,在本公开的一些实施例中,为了避免异常车辆的异常停车事件对应的图像序列的起始时间相比于真实时间提前,在上述步骤S302之前还可以包括步骤S307-S310。
步骤S307:针对每个图像序列,基于背景图像与视频帧的对应关系,获取图像序列对应的多个视频帧。
在本实施例中每个背景图像均对应监控视频中的一个原始视频帧,所以根据背景图像和视频帧的一一对应关系,可以查找到图像序列中背景图像对应的多个视频帧。
步骤S308:对多个视频帧进行车辆检测。
步骤S308执行时,可以采用前文中提及的车辆检测模型,对图像序列中背景图像对应的视频帧进行车辆检测,以识别出对应视频帧中车辆的检测框。
步骤S309:基于检测结果,将多个视频帧中第一次出现同一异常车辆的视频帧确定为目标帧。
例如，基于图像序列中的5个背景图像确定出对应的5个视频帧，若在第2个视频帧中第一次出现图像序列对应的异常车辆的检测框，则确定第2个视频帧为目标帧。
步骤S310:删除图像序列中拍摄时间早于目标帧的部分。
由于在目标帧中第一次出现了异常车辆的检测框,也就可以确定在目标帧开始才出现了异常车辆的异常停车事件,因此可以删除图像序列中拍摄时间早于目标帧的部分,提高异常停车事件的起始时间的准确性。
比如在步骤S309的例子中，第2帧是目标帧，则相应地删除图像序列中的第一帧图像，删除后图像序列包括4个背景图像。
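步骤S307-S310的起始时间校准过程可以示意如下（frames_with_flags中的布尔标志表示对对应视频帧进行车辆检测后是否检出该异常车辆，为便于示意的假设表示）：

```python
def calibrate_sequence(frames_with_flags):
    """frames_with_flags 为图像序列对应的 (视频帧, 是否检出同一异常车辆) 列表，
    按拍摄时间排序；以第一次检出异常车辆的视频帧为目标帧，
    删除图像序列中拍摄时间早于目标帧的部分。"""
    for k, (_, has_vehicle) in enumerate(frames_with_flags):
        if has_vehicle:
            return frames_with_flags[k:]
    return []  # 所有原始帧均未检出该车辆
```

校准后序列的第一帧即目标帧，其对应的时间可作为异常停车事件更准确的起始时间。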
在本公开一些实施例中,异常车辆的检测方法在步骤S302之后,还可以包括步骤S311-步骤S312。
步骤S311:将起始时间在预设时间内的多个异常停车事件确定为同一异常停车事件。
实际中，如果目标道路上出现交通事故而产生异常停车事件，后续在预设时间内可能因为在前的异常停车事件而造成道路拥堵等问题，使得出现在后的车辆也发生异常停车事件。因为在后的异常停车事件与在前的异常停车事件具有关联关系，可以将在后的异常停车事件和在前的异常停车事件认定为同一异常停车事件；因此，为了方便对异常停车事件进行分析，本实施例可以将起始时间在预设时间内的多个异常停车事件确定为同一异常停车事件。
步骤S312:将多个异常停车事件对应的最早起始时间和最晚结束时间,确定为同一异常停车事件的起始时间和结束时间。
因为多个异常停车事件可以作为同一异常停车事件，所以可以将多个异常停车事件对应的最早起始时间作为同一异常停车事件的起始时间，而将多个异常停车事件对应的最晚结束时间作为同一异常停车事件的结束时间。
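步骤S311-S312的事件合并逻辑可以示意如下（window对应所称的预设时间，为假设参数；事件以(起始时间, 结束时间)二元组表示）：

```python
def merge_parking_events(events, window=60):
    """将起始时间落在同一 window 内的多个异常停车事件并为同一事件，
    取最早起始时间与最晚结束时间作为合并后事件的起止时间。"""
    merged = []
    for start, end in sorted(events):
        if merged and start - merged[-1][0] <= window:
            # 起始时间在预设时间内：并入前一事件，延后其结束时间
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

例如一起交通事故引发的多辆车先后异常停车，经该合并后作为同一异常停车事件统一分析。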
图12是本公开实施例提供的一种异常车辆的检测装置的结构示意图,该检测装置可以被理解为上述计算机设备或者计算机设备中的部分功能模块。如图12所示,该检测装置1200包括:视频获取单元1201、背景图像提取单元1202、第一车辆检测单元1203和检测框确定单元1204。
视频获取单元1201用于获取目标道路的监控视频;背景图像提取单元1202用于基于监控视频进行背景建模处理,得到监控视频中至少部分视频帧的背景图像;第一车辆检测单元1203用于对背景图像进行车辆检测处理;检测框确定单元1204用于将背景图像中位于目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框。
在本公开的一些实施例中,还包括掩膜提取单元;掩膜提取单元用于对监控视频进行差分掩膜提取处理,得到目标道路的掩膜;对应的,检测框确定单元包括第一交并比计算子单元和检测框确定子单元;第一交并比计算子单元用于确定背景图像中的检测框与掩膜的交并比;检测框确定子单元用于将交并比大于第一阈值的检测框确定为发生异常停车事件的异常车辆的检测框。
在本公开一些实施例中,异常车辆的检测装置还包括图像序列构建单元和异常停车事件确定单元。
图像序列构建单元用于从至少部分视频帧的背景图像中,确定出至少一个由背景图像构成的图像序列,其中同一图像序列中的背景图像包括同一异常车辆的检测框;异常停车事件确定单元用于针对每个图像序列,将图像序列的起始时间和结束时间,确定为图像序列对应的异常停车事件的起始时间和结束时间,其中图像序列对应的异常停车事件是指图像序列对应的同一异常车辆的异常停车事件。
在本公开的一些实施例中，图像序列构建单元可以包括第二交并比计算子单元、检测框关联子单元和图像序列组合子单元；第二交并比计算子单元用于针对任意两个拍摄间隔小于预设间隔的背景图像，计算其中一个背景图像中的目标检测框与另一个背景图像中的目标检测框的交并比，目标检测框是指异常车辆的检测框；检测框关联子单元用于响应于其中一个背景图像中的第一目标检测框与另一个背景图像中的第二目标检测框的交并比大于第二阈值，则确定第一目标检测框和第二目标检测框是同一异常车辆的检测框；图像序列组合子单元用于将其中一个背景图像和另一个背景图像加入同一图像序列。
在本公开的一些实施例中，至少一个图像序列中包括第一图像序列和第二图像序列，第一图像序列对应于第一异常车辆的异常停车事件，第二图像序列对应于第二异常车辆的异常停车事件；在此情况下，异常车辆的检测装置还包括：异常停车事件融合单元和图像序列合并单元。
异常停车事件融合单元,用于响应于第一异常车辆在第一图像序列的第一帧中的检测框与第二异常车辆在第二图像序列的第一帧中的检测框的交并比大于第三阈值,则确定第一异常停车事件和第二异常停车事件是同一异常停车事件;图像序列合并单元,用于将第一图像序列和第二图像序列合并成一个图像序列。
在本公开一些实施例中,所述装置还包括第二车辆检测单元,第二车辆检测单元用于针对每个图像序列,对监控视频中位于图像序列之前的其他视频帧进行车辆检测;对应的,异常车辆的检测装置还包括检测框删除单元和异常停车事件确定单元。
检测框删除单元用于响应于其他视频帧中包括图像序列对应的同一异常车辆的检测框,则从目标图像序列中删除同一异常车辆的检测框;异常停车事件确定单元用于在其他视频帧中不包括图像序列对应的同一异常车辆的检测框时,执行针对每个图像序列,将图像序列的起始时间和结束时间,确定为图像序列对应的异常停车事件的起始时间和结束时间。
在本公开一些实施例中,异常车辆的检测装置还包括视频帧获取单元、第三车辆检测单元、目标帧确定单元和删除单元:视频帧获取单元用于针对每个图像序列,基于背景图像与视频帧的对应关系,获取图像序列对应的多个视频帧;第三车辆检测单元用于对多个视频帧进行车辆检测;目标帧确定单元用于基于检测结果,将多个视频帧中第一次出现同一异常车辆的视频帧确定为目标帧;删除单元用于删除图像序列中拍摄时间早于目标帧的部分。
在本公开一些实施例中,异常车辆的检测装置还包括异常停车事件合并单元,用于将起始时间在预设时间内的多个异常停车事件确定为同一异常停车事件;以及,将多个异常停车事件对应的最早起始时间和最晚结束时间,确定为同一异常停车事件的起始时间和结束时间。
本实施例提供的装置能够执行上述图1-图11中任一实施例的方法,其执行方式和有益效果类似,在这里不再赘述。
本公开实施例还提供一种计算机设备,该计算机设备包括处理器和存储器,其中,存储器中存储有计算机程序,当计算机程序被处理器执行时可以实现上述图1-图11中任一实施例的方法。
示例的，图13是本公开实施例中的一种计算机设备的结构示意图。下面具体参考图13，其示出了适于用来实现本公开实施例中的计算机设备1300的结构示意图。
如图13所示，计算机设备1300可以包括处理装置1301（例如中央处理器、图形处理器等），其可以根据存储在只读存储器（ROM）1302中的程序或者从存储装置1308加载到随机访问存储器（RAM）1303中的程序而执行各种适当的动作和处理。在RAM 1303中，还存储有计算机设备1300操作所需的各种程序和数据。处理装置1301、ROM 1302以及RAM 1303通过总线1304彼此相连。输入/输出（I/O）接口1305也连接至总线1304。
通常,以下装置可以连接至I/O接口1305:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置1306;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置1307;包括例如磁带、硬盘等的存储装置1308;以及通信装置1309。通信装置1309可以允许计算机设备1300与其他设备进行无线或有线通信以交换数据。虽然图13示出了具有各种装置的计算机设备1300,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置1309从网络上被下载和安装,或者从存储装置1308被安装,或者从ROM 1302被安装。在该计算机程序被处理装置1301执行时,执行本公开实施例的方法中限定的上述功能。
需要说明的是，本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中，计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：电线、光缆、RF(射频)等等，或者上述的任意合适的组合。
在一些实施方式中,客户端、计算机设备可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(“LAN”),广域网(“WAN”),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述计算机设备中所包含的;也可以是单独存在,而未装配入该计算机设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该计算机设备执行时,使得该计算机设备:获取目标道路的监控视频;基于监控视频进行背景建模处理,得到监控视频中至少部分视频帧的背景图像;对背景图像进行车辆检测处理;将背景图像中位于目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码，上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++，还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或计算机设备上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)——连接到用户计算机，或者，可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、片上系统(SOC)、复杂可编程逻辑设备(CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
本公开实施例还提供一种计算机可读存储介质，存储介质中存储有计算机程序，当计算机程序被处理器执行时可以实现上述图1-图11中任一实施例的方法，其执行方式和有益效果类似，在这里不再赘述。
需要说明的是,在本文中,诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上仅是本公开的具体实施方式,使本领域技术人员能够理解或实现本公开。对这些实施例的多种修改对本领域的技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本公开的精神或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文所述的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (20)

  1. 一种异常车辆的检测方法,包括:
    获取目标道路的监控视频;
    基于所述监控视频进行背景建模处理,得到所述监控视频中至少部分视频帧的背景图像;
    对所述背景图像进行车辆检测处理;
    将所述背景图像中位于所述目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框。
  2. 根据权利要求1所述的方法,还包括:
    在所述获取目标道路的监控视频之后,对所述监控视频进行差分掩膜提取处理,得到所述目标道路的掩膜;
    其中,所述将所述背景图像中位于所述目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框,包括:
    确定所述背景图像中的检测框与所述掩膜的交并比;
    将所述交并比大于第一阈值的检测框确定为发生异常停车事件的异常车辆的检测框。
  3. 根据权利要求1或2所述的方法,还包括:
    所述将所述背景图像中位于所述目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框之后,从所述至少部分视频帧的背景图像中,确定出至少一个由背景图像构成的图像序列,其中同一图像序列中的背景图像包括同一异常车辆的检测框;
    针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间,其中所述图像序列对应的异常停车事件是指所述图像序列对应的所述同一异常车辆的异常停车事件。
  4. 根据权利要求3所述的方法,其中,所述从所述至少部分视频帧的背景图像中,确定出至少一个由背景图像构成的图像序列,包括:
    针对任意两个拍摄间隔小于预设间隔的背景图像,计算其中一个背景图像中的目标检测框与另一个背景图像中的目标检测框的交并比,所述目标检测框是指异常车辆的检测框;
    响应于所述其中一个背景图像中的第一目标检测框与所述另一个背景图像中的第二目标检测框的交并比大于第二阈值,则确定所述第一目标检测框和所述第二目标检测框是同一异常车辆的检测框;
    将所述其中一个背景图像和所述另一个背景图像加入同一图像序列。
  5. 根据权利要求3所述的方法,其中,所述至少一个图像序列中包括第一图像序列和第二图像序列,所述第一图像序列对应于第一异常车辆的异常停车事件,所述第二图像序列对应于第二异常车辆的异常停车事件;
    所述方法还包括:
    所述针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间之前,响应于所述第一异常车辆在所述第一图像序列的第一帧中的检测框与所述第二异常车辆在所述第二图像序列的第一帧中的检测框的交并比大于第三阈值,则确定所述第一异常停车事件和所述第二异常停车事件是同一异常停车事件;
    将所述第一图像序列和所述第二图像序列合并成一个图像序列。
  6. 根据权利要求3所述的方法,还包括:
    在所述从所述至少部分视频帧的背景图像中,确定出至少一个由背景图像构成的图像序列之后,针对每个图像序列,对所述监控视频中位于所述图像序列之前的其他视频帧进行车辆检测;
    响应于所述其他视频帧中包括所述图像序列对应的所述同一异常车辆的检测框,则从所述目标图像序列中删除所述同一异常车辆的检测框;
    其中,响应于所述其他视频帧中不包括所述图像序列对应的所述同一异常车辆的检测框,则执行所述针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间的步骤。
  7. 根据权利要求3所述的方法,还包括:
    在所述针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间之前,针对每个图像序列,基于背景图像与视频帧的对应关系,获取所述图像序列对应的多个视频帧;
    对所述多个视频帧进行车辆检测;
    基于检测结果,将所述多个视频帧中第一次出现所述同一异常车辆的视频帧确定为目标帧;
    删除所述图像序列中拍摄时间早于所述目标帧的部分。
  8. 根据权利要求3所述的方法,还包括:
    在所述针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间之后,将起始时间在预设时间内的多个异常停车事件确定为同一异常停车事件;
    将所述多个异常停车事件对应的最早起始时间和最晚结束时间,确定为所述同一异常停车事件的起始时间和结束时间。
  9. 一种异常车辆的检测装置,包括:
    视频获取单元,用于获取目标道路的监控视频;
    背景图像提取单元,用于基于所述监控视频进行背景建模处理,得到所述监控视频中至少部分视频帧的背景图像;
    第一车辆检测单元,用于对所述背景图像进行车辆检测处理;
    检测框确定单元,用于将所述背景图像中位于所述目标道路上的检测框确定为发生异常停车事件的异常车辆的检测框。
  10. 根据权利要求9所述的装置,还包括:
    掩膜提取单元,用于对所述监控视频进行差分掩膜提取处理,得到所述目标道路的掩膜;
    所述检测框确定单元包括:
    第一交并比计算子单元,用于确定所述背景图像中的检测框与所述掩膜的交并比;
    检测框确定子单元,用于将所述交并比大于第一阈值的检测框确定为发生异常停车事件的异常车辆的检测框。
  11. 根据权利要求9或10所述的装置,还包括:
    图像序列构建单元,用于从所述至少部分视频帧的背景图像中,确定出至少一个由背景图像构成的图像序列,其中同一图像序列中的背景图像包括同一异常车辆的检测框;
    异常停车事件确定单元,用于针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间,其中所述图像序列对应的异常停车事件是指所述图像序列对应的所述同一异常车辆的异常停车事件。
  12. 根据权利要求11所述的装置,其中,所述图像序列构建单元包括:
    第二交并比计算子单元,用于针对任意两个拍摄间隔小于预设间隔的背景图像,计算其中一个背景图像中的目标检测框与另一个背景图像中的目标检测框的交并比,所述目标检测框是指异常车辆的检测框;
    检测框关联子单元,用于响应于所述其中一个背景图像中的第一目标检测框与所述另一个背景图像中的第二目标检测框的交并比大于第二阈值,则确定所述第一目标检测框和所述第二目标检测框是同一异常车辆的检测框;
    图像序列组合子单元,用于将所述其中一个背景图像和所述另一个背景图像加入同一图像序列。
  13. 根据权利要求11所述的装置,其中,所述至少一个图像序列中包括第一图像序列和第二图像序列,所述第一图像序列对应于第一异常车辆的异常停车事件,所述第二图像序列对应于第二异常车辆的异常停车事件;并且,所述装置还包括:
    异常停车事件融合单元,用于响应于所述第一异常车辆在所述第一图像序列的第一帧中的检测框与所述第二异常车辆在所述第二图像序列的第一帧中的检测框的交并比大于第三阈值,则确定所述第一异常停车事件和所述第二异常停车事件是同一异常停车事件;
    图像序列合并单元,用于将所述第一图像序列和所述第二图像序列合并成一个图像序列。
  14. 根据权利要求11所述的装置,还包括:
    第二车辆检测单元,用于针对每个图像序列,对所述监控视频中位于所述图像序列之前的其他视频帧进行车辆检测;
    检测框删除单元,用于响应于所述其他视频帧中包括所述图像序列对应的所述同一异常车辆的检测框,则从所述目标图像序列中删除所述同一异常车辆的检测框;
    异常停车事件确定单元,用于在所述其他视频帧中不包括所述图像序列对应的所述同一异常车辆的检测框时,执行所述针对每个图像序列,将所述图像序列的起始时间和结束时间,确定为所述图像序列对应的异常停车事件的起始时间和结束时间。
  15. 根据权利要求11所述的装置,还包括:
    视频帧获取单元,用于针对每个图像序列,基于背景图像与视频帧的对应关系,获取所述图像序列对应的多个视频帧;
    第三车辆检测单元,用于对所述多个视频帧进行车辆检测;
    目标帧确定单元,用于基于检测结果,将所述多个视频帧中第一次出现所述同一异常车辆的视频帧确定为目标帧;
    删除单元,用于删除所述图像序列中拍摄时间早于所述目标帧的部分。
  16. 根据权利要求11所述的装置,还包括:
    异常停车事件合并单元,用于将起始时间在预设时间内的多个异常停车事件确定为同一异常停车事件;以及,将所述多个异常停车事件对应的最早起始时间和最晚结束时间,确定为所述同一异常停车事件的起始时间和结束时间。
  17. 一种计算机设备,包括:
    存储器;和
    耦接至所述存储器的处理器,所述处理器被配置为基于存储在所述存储器中的指令,执行如权利要求1-8中任一项所述的检测方法。
  18. 一种非瞬时性计算机可读存储介质,其中,所述存储介质中存储有计算机程序,当所述计算机程序被处理器执行时,实现如权利要求1-8中任一项所述的检测方法。
  19. 一种计算机程序,包括:
    指令,所述指令当由处理器执行时使所述处理器执行根据权利要求1-8中任一项所述的检测方法。
  20. 一种非瞬时性计算机程序产品，包括指令，所述指令当由处理器执行时使所述处理器执行根据权利要求1-8中任一项所述的检测方法。
PCT/CN2022/091653 2021-06-16 2022-05-09 异常车辆的检测方法、装置、设备及存储介质 WO2022262471A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/262,806 US20240071215A1 (en) 2021-06-16 2022-05-09 Detection method and apparatus of abnormal vehicle, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110667910.5A CN113409587B (zh) 2021-06-16 2021-06-16 异常车辆的检测方法、装置、设备及存储介质
CN202110667910.5 2021-06-16

Publications (1)

Publication Number Publication Date
WO2022262471A1 true WO2022262471A1 (zh) 2022-12-22

Family

ID=77684426

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/091653 WO2022262471A1 (zh) 2021-06-16 2022-05-09 异常车辆的检测方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US20240071215A1 (zh)
CN (1) CN113409587B (zh)
WO (1) WO2022262471A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116450880A (zh) * 2023-05-11 2023-07-18 湖南承希科技有限公司 一种语义检测的车载视频智能处理方法

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409587B (zh) * 2021-06-16 2022-11-22 北京字跳网络技术有限公司 异常车辆的检测方法、装置、设备及存储介质
CN114005074B (zh) * 2021-12-30 2022-04-12 以萨技术股份有限公司 交通事故的确定方法、装置及电子设备
CN114049771A (zh) * 2022-01-12 2022-02-15 华砺智行(武汉)科技有限公司 基于双模态的交通异常检测方法、系统和存储介质
CN114332826B (zh) * 2022-03-10 2022-07-08 浙江大华技术股份有限公司 一种车辆图像识别方法、装置、电子设备和存储介质
CN114820691B (zh) * 2022-06-28 2022-11-15 苏州魔视智能科技有限公司 本车运动状态检测方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140348390A1 (en) * 2013-05-21 2014-11-27 Peking University Founder Group Co., Ltd. Method and apparatus for detecting traffic monitoring video
CN109285341A (zh) * 2018-10-31 2019-01-29 中电科新型智慧城市研究院有限公司 一种基于实时视频的城市道路车辆异常停驶检测方法
CN110705461A (zh) * 2019-09-29 2020-01-17 北京百度网讯科技有限公司 一种图像处理方法及装置
CN111832492A (zh) * 2020-07-16 2020-10-27 平安科技(深圳)有限公司 静态交通异常的判别方法、装置、计算机设备及存储介质
CN112200131A (zh) * 2020-10-28 2021-01-08 鹏城实验室 一种基于视觉的车辆碰撞检测方法、智能终端及存储介质
CN113361299A (zh) * 2020-03-03 2021-09-07 浙江宇视科技有限公司 一种异常停车的检测方法、装置、存储介质及电子设备
CN113409587A (zh) * 2021-06-16 2021-09-17 北京字跳网络技术有限公司 异常车辆的检测方法、装置、设备及存储介质

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2578464B1 (en) * 2011-10-06 2014-03-19 Honda Research Institute Europe GmbH Video-based warning system for a vehicle
CN106778540B (zh) * 2013-03-28 2019-06-28 南通大学 停车检测准确的基于双层背景的停车事件检测方法
CN104376554B (zh) * 2014-10-16 2017-07-18 中海网络科技股份有限公司 一种基于图像纹理的违章停车检测方法
CN105160326A (zh) * 2015-09-15 2015-12-16 杭州中威电子股份有限公司 一种高速公路上停车的自动检测方法与装置
CN105868700A (zh) * 2016-03-25 2016-08-17 哈尔滨工业大学深圳研究生院 一种基于监控视频的车型识别与跟踪方法及系统
CN107292239A (zh) * 2017-05-24 2017-10-24 南京邮电大学 一种基于三重背景更新的违章停车检测方法
CN107491753A (zh) * 2017-08-16 2017-12-19 电子科技大学 一种基于背景建模的违章停车检测方法
CN109934075A (zh) * 2017-12-19 2019-06-25 杭州海康威视数字技术股份有限公司 异常事件检测方法、装置、系统及电子设备
CN108335489A (zh) * 2018-03-11 2018-07-27 西安电子科技大学 高速公路车辆行为语义分析及异常行为监控系统及方法
CN110705495A (zh) * 2019-10-10 2020-01-17 北京百度网讯科技有限公司 交通工具的检测方法、装置、电子设备和计算机存储介质
CN112749596A (zh) * 2019-10-31 2021-05-04 顺丰科技有限公司 异常画面检测方法、装置、电子设备和存储介质
CN111368687B (zh) * 2020-02-28 2022-07-19 成都市微泊科技有限公司 一种基于目标检测和语义分割的人行道车辆违停检测方法
CN111369807B (zh) * 2020-03-24 2022-04-12 北京百度网讯科技有限公司 一种交通事故的检测方法、装置、设备和介质
CN111696135A (zh) * 2020-06-05 2020-09-22 深兰人工智能芯片研究院(江苏)有限公司 基于交并比的违禁停车检测方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140348390A1 (en) * 2013-05-21 2014-11-27 Peking University Founder Group Co., Ltd. Method and apparatus for detecting traffic monitoring video
CN109285341A (zh) * 2018-10-31 2019-01-29 中电科新型智慧城市研究院有限公司 一种基于实时视频的城市道路车辆异常停驶检测方法
CN110705461A (zh) * 2019-09-29 2020-01-17 北京百度网讯科技有限公司 一种图像处理方法及装置
CN113361299A (zh) * 2020-03-03 2021-09-07 浙江宇视科技有限公司 一种异常停车的检测方法、装置、存储介质及电子设备
CN111832492A (zh) * 2020-07-16 2020-10-27 平安科技(深圳)有限公司 静态交通异常的判别方法、装置、计算机设备及存储介质
CN112200131A (zh) * 2020-10-28 2021-01-08 鹏城实验室 一种基于视觉的车辆碰撞检测方法、智能终端及存储介质
CN113409587A (zh) * 2021-06-16 2021-09-17 北京字跳网络技术有限公司 异常车辆的检测方法、装置、设备及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SUI, JING: "TRAFFIC ANOMALY DETECTION BASED ON MOVING OBJECT TRAJECTORY", CHINESE SELECTED DOCTORAL DISSERTATIONS AND MASTER'S THESES FULL-TEXT DATABASES (MASTER), ENGINEERING SCIENCE & TECHNOLOGY, vol. 35, no. 1, January 2018 (2018-01-01), pages 246 - 252, XP093015287, ISSN: 1674-0246 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116450880A (zh) * 2023-05-11 2023-07-18 湖南承希科技有限公司 一种语义检测的车载视频智能处理方法
CN116450880B (zh) * 2023-05-11 2023-09-01 湖南承希科技有限公司 一种语义检测的车载视频智能处理方法

Also Published As

Publication number Publication date
CN113409587B (zh) 2022-11-22
US20240071215A1 (en) 2024-02-29
CN113409587A (zh) 2021-09-17

Similar Documents

Publication Publication Date Title
WO2022262471A1 (zh) 异常车辆的检测方法、装置、设备及存储介质
CN111626208B (zh) 用于检测小目标的方法和装置
WO2020248386A1 (zh) 视频分析方法、装置、计算机设备及存储介质
US20230394671A1 (en) Image segmentation method and apparatus, and device, and storage medium
CN110889351B (zh) 视频检测方法、装置、终端设备及可读存储介质
US10803324B1 (en) Adaptive, self-evolving learning and testing platform for self-driving and real-time map construction
CN107316006A (zh) 一种道路障碍物检测的方法和系统
JP6700373B2 (ja) ビデオ動画の人工知能のための学習対象イメージパッケージング装置及び方法
CN110942629A (zh) 道路交通事故管理方法、装置及终端设备
WO2020246655A1 (ko) 상황 인지 방법 및 이를 수행하는 장치
CN110910415A (zh) 抛物检测方法、装置、服务器和计算机可读介质
WO2021147055A1 (en) Systems and methods for video anomaly detection using multi-scale image frame prediction network
US10922826B1 (en) Digital twin monitoring systems and methods
US20180268247A1 (en) System and method for detecting change using ontology based saliency
CN111382695A (zh) 用于检测目标的边界点的方法和装置
CN108346294B (zh) 车辆识别系统、方法和装置
CN112434753A (zh) 模型训练方法、目标检测方法、装置、设备及存储介质
CN115766401B (zh) 工业告警信息解析方法、装置、电子设备与计算机介质
JP6681965B2 (ja) 自律走行のための学習対象イメージ抽出装置及び方法
CN116923372A (zh) 驾驶控制方法、装置、设备及介质
CN114863690B (zh) 一种实线变道的识别方法、装置、介质及电子设备
WO2022142172A1 (zh) 一种检测近场物体的方法、装置、介质和电子设备
Algiriyage et al. Traffic Flow Estimation based on Deep Learning for Emergency Traffic Management using CCTV Images.
CN114627400A (zh) 一种车道拥堵检测方法、装置、电子设备和存储介质
CN112434644A (zh) 车辆图像的处理方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823954

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18262806

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE