CN110838230B - Mobile video monitoring method, monitoring center and system - Google Patents


Info

Publication number
CN110838230B
CN110838230B (application CN201911122143.9A)
Authority
CN
China
Prior art keywords: law enforcement, vehicle, video, accident, mobile
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911122143.9A
Other languages
Chinese (zh)
Other versions
CN110838230A (en)
Inventor
侯宇红
朱开印
杨林赐
张鹏岩
丁凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201911122143.9A priority Critical patent/CN110838230B/en
Publication of CN110838230A publication Critical patent/CN110838230A/en
Application granted granted Critical
Publication of CN110838230B publication Critical patent/CN110838230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/44 Event detection
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0108 Measuring and analyzing of parameters relative to traffic conditions based on the source of data
    • G08G1/0112 Measuring and analyzing of parameters relative to traffic conditions based on the source of data from the vehicle, e.g. floating car data [FCD]
    • G08G1/0125 Traffic data processing
    • G08G1/20 Monitoring the location of vehicles belonging to a group, e.g. fleet of vehicles, countable or determined number of vehicles
    • G08G1/202 Dispatching vehicles on the basis of a location, e.g. taxi dispatching
    • G08G1/205 Indicating the location of the monitored vehicles as destination, e.g. accidents, stolen, rental
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N7/185 Closed-circuit television [CCTV] systems for receiving images from a single remote source from a mobile camera, e.g. for remote control

Abstract

The application discloses a mobile video monitoring method, monitoring center and system. The monitoring system comprises a mobile video monitoring center, at least one mobile law enforcement vehicle and at least one law enforcement assistance vehicle. The mobile law enforcement vehicle collects real-time law enforcement video and uploads it, together with its real-time geographic position, to the monitoring center. Each law enforcement assistance vehicle uploads its real-time geographic position and current idle state to the monitoring center. The monitoring center performs video analysis on the real-time law enforcement video to obtain an accident handling result, determines the optimal law enforcement assistance vehicle from that result and the vehicles' real-time geographic positions, and sends a rescue instruction directly to that vehicle. Because the rescue vehicle is identified from video and instructed directly, the time consumed by intermediate links is reduced, traffic paralysis is mitigated, and injured people can be treated promptly.

Description

Mobile video monitoring method, monitoring center and system
Technical Field
The present application relates to the field of video monitoring, and in particular, to a mobile video monitoring method, a monitoring center, and a system.
Background
Urban public transport is an important subsystem of the urban socio-economic system. Its level of development is a key indicator of a country's degree of modernization, and it also reflects national economic strength, urban economic development and the living standards of urban residents. As urbanization expands and population grows, the pressure on urban traffic increases, and video monitoring has accordingly evolved from conventional fixed monitoring points to mobile video monitoring.
In a mobile video monitoring system, a mobile terminal views city-condition video captured by the monitoring system in real time over a wireless network. Examples include vehicle-mounted mobile monitoring for city-management law enforcement and public-security mobile command vehicles, both of which must transmit the relevant real-time video from vehicle-mounted cameras back to a monitoring center.
At present, mobile video monitoring systems are used only for viewing city conditions. When an accident is captured on video, the operator can only dial the corresponding rescue telephone numbers; for a serious accident, several rescue numbers must be dialled before rescue can even be requested, after which the rescue center allocates vehicles according to the alarm calls. Existing mobile video monitoring systems therefore cannot notify rescue teams promptly, which easily leads to traffic paralysis and delayed treatment of injured people.
Disclosure of Invention
The application provides a mobile video monitoring method, which comprises the following steps:
receiving real-time law enforcement videos and real-time geographic positions uploaded by a mobile law enforcement vehicle, and receiving real-time geographic positions and idle states uploaded by a law enforcement assistance vehicle;
performing primary analysis on a real-time law enforcement video uploaded by a mobile law enforcement vehicle, and identifying video data of an accident;
analyzing the identified accident video data according to a pre-trained video anomaly analysis model to obtain an accident handling result matched with a field accident, and determining the type of the law enforcement assistance vehicle to be dispatched according to the accident handling result;
determining, based on the reporting time of the scene accident video and the type of law enforcement assistance vehicle to be dispatched, the geographic positions of all candidate law enforcement assistance vehicles within that time range;
and determining the optimal law enforcement assisting vehicle according to the geographic position and the idle state of the law enforcement assisting vehicle to be dispatched and the real-time geographic information of the mobile law enforcement vehicle matched with the scene accident video, and sending an execution instruction to the optimal assisting vehicle.
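The claimed flow above can be sketched as a minimal dispatch skeleton. All names (`AssistVehicle`, `required_vehicle_type`, the accident-result strings and the result-to-type mapping) are hypothetical illustrations, not terms from the patent:

```python
from dataclasses import dataclass

# Hypothetical skeleton: map the accident-handling result to a vehicle
# type, then filter the reported fleet by type and idle state.

@dataclass
class AssistVehicle:
    vid: str
    vtype: str                 # e.g. "ambulance", "tow_truck", "police"
    position: tuple            # real-time (lat, lon) reported over IoT
    idle: bool                 # current idle state reported over IoT

def required_vehicle_type(accident_result: str) -> str:
    mapping = {"injury": "ambulance", "blocked_lane": "tow_truck"}
    return mapping.get(accident_result, "police")    # default: patrol car

def pick_candidates(fleet, accident_result):
    vtype = required_vehicle_type(accident_result)
    return [v for v in fleet if v.vtype == vtype and v.idle]
```

The remaining step, ranking the candidates by arrival time, is given by the travel-time formula later in the document.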
In the mobile video monitoring method described above, various accident-scene video data are collected in advance and labelled by accident anomaly type, yielding accident-scene video data carrying accident anomaly type labels; the labelled accident-scene video data are then fed as sample data into a convolutional neural network model for training to obtain the video anomaly analysis model.
The mobile video monitoring method as described above, wherein training the convolutional neural network model to obtain the video anomaly analysis model specifically comprises the following sub-steps:
extracting local behavior characteristics corresponding to various accident scene video data;
summarizing local behavior characteristics corresponding to various accident scene data to obtain multi-dimensional local behavior characteristics;
performing dimensionality reduction processing on the multi-dimensional local behavior characteristics to obtain illegal behavior characteristics of various accident abnormal types;
and classifying the illegal behavior characteristics corresponding to various accident abnormal types to obtain a video abnormal analysis model for identifying various accident abnormal types.
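A toy walk-through of the four sub-steps above, with hand-crafted stand-in features and per-class centroids in place of the patent's convolutional network; every function name and feature choice here is an assumption for illustration only:

```python
import numpy as np

def extract_local_features(clip):
    # Step 1: clip is a (frames, height, width) array; use mean intensity,
    # intensity spread, and mean frame-to-frame change as toy features.
    motion = np.abs(np.diff(clip, axis=0)).mean()
    return np.array([clip.mean(), clip.std(), motion])

def aggregate(features_per_clip):
    # Step 2: summarize per-clip features into one multi-dimensional matrix.
    return np.vstack(features_per_clip)

def reduce_dim(X, k=2):
    # Step 3: dimensionality reduction via PCA (SVD on centred data).
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T

def fit_centroids(Z, labels):
    # Step 4: one centroid per accident anomaly type (toy classifier).
    return {lab: Z[labels == lab].mean(axis=0) for lab in np.unique(labels)}
```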
The mobile video monitoring method as described above, wherein performing primary analysis on the real-time law enforcement video uploaded by the mobile law enforcement vehicle and identifying video data of an accident specifically comprises: cutting the video reported by the mobile law enforcement vehicle into video frames, processing each video frame image, and determining as accident video data those frames in which two adjacent vehicles in a traffic lane are detected within a preset distance of each other and do not move within a preset time.
The mobile video monitoring method as described above, wherein determining the optimal law enforcement assistance vehicle specifically comprises: after the accident is identified, selecting the optimal law enforcement assistance vehicle according to the real-time geographic position of the mobile law enforcement vehicle and the geographic positions of the law enforcement assistance vehicles to be dispatched, as follows:
T_i = \sum_{j=1}^{m} S_{ij} / V_{ij} + t_i
where T_i is the time required for the i-th law enforcement assistance vehicle to reach the law enforcement scene, and the i that minimizes T_i identifies the optimal assistance vehicle; n is the number of law enforcement assistance vehicles (i = 1, ..., n); m is the number of predetermined road segments between the geographic location of the i-th law enforcement assistance vehicle and that of the mobile law enforcement vehicle; S_ij is the predetermined length of the j-th such segment; V_ij is the predetermined travel speed on the j-th such segment; and t_i is the predetermined preparation time of the i-th law enforcement assistance vehicle.
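Under the formula above, selecting the optimal assistance vehicle reduces to computing T_i for each candidate and taking the minimum. A small sketch; the data layout and units are illustrative, not specified by the patent:

```python
def travel_time(segments, prep_time):
    """T_i = sum_j S_ij / V_ij + t_i for one candidate vehicle.

    segments: list of (S_ij, V_ij) pairs, e.g. km and km/h, for the
    predetermined road segments between vehicle i and the scene;
    prep_time: the predetermined preparation time t_i (same time unit).
    """
    return sum(s / v for s, v in segments) + prep_time

def optimal_vehicle(candidates):
    """candidates: {vehicle_id: (segments, prep_time)}; minimum T_i wins."""
    times = {vid: travel_time(seg, t0) for vid, (seg, t0) in candidates.items()}
    return min(times, key=times.get), times
```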
The present application further provides a mobile video monitoring center, including:
the internet of things communication module is used for receiving real-time law enforcement videos and real-time geographic positions uploaded by the mobile law enforcement vehicle and receiving real-time geographic positions and idle states uploaded by the law enforcement assistance vehicle;
the accident analysis module is used for carrying out primary analysis on the real-time law enforcement video uploaded by the mobile law enforcement vehicle and identifying the video data of the accident; the video anomaly analysis module is used for analyzing the identified accident video data according to a pre-trained video anomaly analysis model to obtain an accident processing result matched with the field accident;
the dispatching law enforcement aid vehicle module is used for determining the type of the law enforcement aid vehicle to be dispatched according to the accident processing result, and determining the geographic positions of all the law enforcement aid vehicles to be dispatched within the time range according to the time reported by the scene accident video and the type of the law enforcement aid vehicle to be dispatched; determining an optimal law enforcement assisting vehicle according to the geographic position and the idle state of the law enforcement assisting vehicle to be dispatched and the real-time geographic information of the mobile law enforcement vehicle matched with the scene accident video;
the internet of things communication module is further used for sending an execution instruction to the optimal assistance vehicle.
The mobile video monitoring center further comprises a video anomaly analysis model training module, which collects video data of various accident scenes in advance and labels them by accident anomaly type to obtain accident-scene video data carrying accident anomaly type labels; the labelled accident-scene video data are then fed as sample data into a convolutional neural network model for training to obtain the video anomaly analysis model;
the accident analysis module is specifically used for extracting local behavior characteristics corresponding to various accident scene video data; summarizing local behavior characteristics corresponding to various accident scene data to obtain multi-dimensional local behavior characteristics; performing dimensionality reduction processing on the multi-dimensional local behavior characteristics to obtain illegal behavior characteristics of various accident abnormal types; and classifying the illegal behavior characteristics corresponding to various accident abnormal types to obtain a video abnormal analysis model for identifying various accident abnormal types.
The application also provides a mobile video monitoring system, which comprises the mobile video monitoring center, at least one mobile law enforcement vehicle and at least one law enforcement assistance vehicle;
the mobile law enforcement vehicle is used for collecting real-time law enforcement videos and uploading the real-time law enforcement videos and the real-time geographic position of the mobile law enforcement vehicle to the monitoring center;
the law enforcement assistance vehicle is used to upload to the monitoring center the real-time geographic location of the law enforcement assistance vehicle and the current idle status of the law enforcement assistance vehicle.
In the mobile video monitoring system described above, the mobile law enforcement vehicle comprises a vehicle gauge control chip, together with a vehicle-mounted video acquisition module, a GPS positioning module and an Internet of Things communication module that are connected to the vehicle gauge control chip;
the vehicle-mounted video acquisition module is used for shooting law enforcement videos in real time through a vehicle-mounted camera in the running process of the mobile law enforcement vehicle and sending the real-time law enforcement videos of the vehicle to the vehicle gauge control chip;
the GPS positioning module is used for tracking and collecting the current geographic information (such as longitude and latitude information) of the mobile law enforcement vehicle in real time and sending the real-time geographic information of the vehicle to the vehicle gauge control chip;
the vehicle gauge control chip is used for sending the real-time law enforcement video and the real-time geographic information of the mobile law enforcement vehicle to the Internet of things communication module;
the internet of things communication module realizes communication between the mobile law enforcement vehicle and the monitoring center through the internet of things, and uploads real-time law enforcement videos and real-time geographic information of the mobile law enforcement vehicle to the monitoring center through the internet of things.
In the mobile video monitoring system described above, the law enforcement assistance vehicle comprises a vehicle gauge control chip, together with a GPS positioning module and an Internet of Things communication module that are connected to the vehicle gauge control chip;
the GPS positioning module is used for tracking and collecting the current geographic information (such as longitude and latitude information) of the law enforcement assistance vehicle in real time and sending the real-time geographic information of the vehicle to the vehicle gauge control chip;
the vehicle gauge control chip is used for sending the real-time geographic information and the current idle state of the law enforcement assistance vehicle to the Internet of things communication module;
the internet of things communication module realizes communication between the law enforcement aid vehicle and the monitoring center through the internet of things, and real-time geographic information and the current idle state of the law enforcement aid vehicle are uploaded to the monitoring center through the internet of things.
The beneficial effect that this application realized is as follows: by adopting the mobile video monitoring system provided by the application, the accident can be determined through analyzing the law enforcement video shot by the mobile law enforcement vehicle, then the optimal law enforcement assisting vehicle is determined according to the real-time geographic position, the real-time geographic position of the law enforcement assisting vehicle and the idle state, and the rescue instruction is directly sent to the law enforcement assisting vehicle, so that the time consumed by intermediate links is reduced, the traffic paralysis problem is reduced, and the injured person can be timely cured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present invention, and other drawings can be obtained by those skilled in the art according to the drawings.
Fig. 1 is a schematic diagram of a mobile video surveillance system according to an embodiment of the present application;
FIG. 2 is a block diagram of a mobile video surveillance system according to an embodiment of the present application;
fig. 3 is a flowchart of a mobile video monitoring method according to a second embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example one
In one embodiment of the present application, there is provided a mobile video surveillance system, as shown in fig. 1, in a certain management area, the mobile video surveillance system includes a plurality of mobile law enforcement vehicles, a surveillance center and a plurality of law enforcement assistance vehicles, wherein:
a mobile law enforcement vehicle: the monitoring center: law enforcement assistance vehicle m: 1: n, (m, n > ═ 1).
The mobile law enforcement vehicle shoots law enforcement videos in real time and transmits the law enforcement videos to the monitoring center through the Internet of things; the law enforcement assisting vehicle also reports the position information of the law enforcement assisting vehicle to the monitoring center in real time through the Internet of things; the monitoring center analyzes the law enforcement video according to the video analysis model to obtain a mobile law enforcement vehicle needing assistance, determines an optimal law enforcement assistance vehicle according to the position of the mobile law enforcement vehicle needing assistance, sends an assistance instruction comprising the position of the mobile law enforcement vehicle to the law enforcement assistance vehicle, and the law enforcement assistance vehicle assists the mobile law enforcement vehicle according to the assistance instruction.
In the embodiment of the present application, as shown in fig. 2, a mobile law enforcement vehicle 210 (such as a city management law enforcement vehicle, a social security mobile command vehicle, etc.) includes a vehicle-level control chip 211 (i.e., an electronic product standard chip conforming to vehicle safety), and an on-vehicle video acquisition module 212, a GPS positioning module 213, and an internet of things communication module 214 connected to the vehicle-level control chip 211;
the vehicle-mounted video acquisition module is used for shooting law enforcement videos in real time through a vehicle-mounted camera in the running process of the mobile law enforcement vehicle and sending the real-time law enforcement videos of the vehicle to the vehicle gauge control chip;
the GPS positioning module is used for tracking and collecting the current geographic information (such as longitude and latitude information) of the mobile law enforcement vehicle in real time and sending the real-time geographic information of the vehicle to the vehicle gauge control chip;
the vehicle gauge control chip is used for sending the real-time law enforcement video and the real-time geographic information of the mobile law enforcement vehicle to the Internet of things communication module;
the communication module of the internet of things realizes the communication between the mobile law enforcement vehicle and the monitoring center through the internet of things, wherein the data transmitted to the monitoring center also comprises the serial number of the mobile law enforcement vehicle.
In the embodiment of the present application, as shown in FIG. 2, the law enforcement assistance vehicle 220 includes an ambulance, a trailer, a traffic police cruiser; the law enforcement assistance vehicle 220 comprises a vehicle regulation level control chip 221, and a GPS positioning module 222 and an Internet of things communication module 223 which are connected with the vehicle regulation level control chip 221;
the GPS positioning module is used for tracking and collecting the current geographic information (such as longitude and latitude information) of the law enforcement assistance vehicle in real time and sending the real-time geographic information of the vehicle to the vehicle gauge control chip;
the vehicle gauge control chip is used for sending the real-time geographic information of the law enforcement assistance vehicle to the Internet of things communication module;
the internet of things communication module realizes communication between the law enforcement assistance vehicle and the monitoring center through the Internet of Things. The data transmitted to the monitoring center also include the law enforcement assistance vehicle's number and a flag indicating whether the vehicle is currently idle (flag set means idle; flag reset means busy). If the vehicle is in an idle waiting state, it uploads a set idle flag to the monitoring center, indicating that it can carry out assistance; if it is processing a law enforcement task, it uploads a reset idle flag, indicating that a task is being executed, and the flag is set again once the task is finished.
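The set/reset flag protocol described above might be modelled as follows; the class, method names and payload fields are hypothetical, not from the patent:

```python
# Idle-flag semantics: 1 (set) = idle and able to assist;
# 0 (reset) = busy executing a law enforcement task.

class AssistVehicleState:
    def __init__(self, vehicle_no):
        self.vehicle_no = vehicle_no
        self.idle_flag = 1          # start idle and waiting

    def start_task(self):
        self.idle_flag = 0          # reset: a task is being executed

    def finish_task(self):
        self.idle_flag = 1          # set again once the task completes

    def report(self, lat, lon):
        # Payload uploaded to the monitoring center over the IoT link.
        return {"vehicle_no": self.vehicle_no, "lat": lat, "lon": lon,
                "idle": self.idle_flag}
```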
In the embodiment of the present application, as shown in fig. 2, the monitoring center 230 is configured to receive real-time law enforcement videos and real-time geographic information of law enforcement vehicles transmitted by a plurality of mobile law enforcement vehicles 210, determine positions and running tracks of the law enforcement vehicles according to the real-time geographic information of the mobile law enforcement vehicles 210, and store the real-time law enforcement videos; and is also configured to receive law enforcement assistance vehicle real-time geographic information transmitted by the plurality of law enforcement assistance vehicles 220;
specifically, the monitoring center 230 includes an internet of things communication module 231, an accident analysis module 232, and a dispatching law enforcement assistance vehicle module 233;
the internet of things communication module 231 is used for receiving the real-time law enforcement video and the real-time geographic position uploaded by the mobile law enforcement vehicle, and receiving the real-time geographic position and the idle state uploaded by the law enforcement assistance vehicle;
the accident analysis module 232 is used for carrying out primary analysis on the real-time law enforcement video uploaded by the mobile law enforcement vehicle and identifying video data of an accident; the video anomaly analysis module is used for analyzing the identified accident video data according to a pre-trained video anomaly analysis model to obtain an accident processing result matched with the field accident;
the dispatching law enforcement assistant vehicle module 233 is used for determining the type of the law enforcement assistant vehicle to be dispatched according to the accident processing result, and determining the geographic positions of all the law enforcement assistant vehicles to be dispatched within the time range according to the time reported by the scene accident video and the type of the law enforcement assistant vehicle to be dispatched; determining an optimal law enforcement assisting vehicle according to the geographic position and the idle state of the law enforcement assisting vehicle to be dispatched and the real-time geographic information of the mobile law enforcement vehicle matched with the scene accident video;
the internet of things communication module 231 is further configured to send an execution instruction to the optimal assisted vehicle;
furthermore, the monitoring center also comprises a video anomaly analysis model training module, which collects various accident-scene video data in advance and labels them by accident anomaly type to obtain accident-scene video data carrying accident anomaly type labels; the labelled accident-scene video data are then fed as sample data into a convolutional neural network model for training to obtain the video anomaly analysis model;
correspondingly, the accident analysis module is specifically used for extracting local behavior characteristics corresponding to various accident scene video data; summarizing local behavior characteristics corresponding to various accident scene data to obtain multi-dimensional local behavior characteristics; performing dimensionality reduction processing on the multi-dimensional local behavior characteristics to obtain illegal behavior characteristics of various accident abnormal types; and classifying the illegal behavior characteristics corresponding to various accident abnormal types to obtain a video abnormal analysis model for identifying various accident abnormal types.
And the monitoring center is also provided with a video processing module which is used for processing the video according to a pre-trained video abnormity analysis model after receiving the real-time law enforcement video and determining the optimal assistance vehicle according to the processing result and the real-time geographic information of the mobile law enforcement vehicle.
Example two
An embodiment of the present application provides a mobile video monitoring method, as shown in fig. 3, where the method is executed by a monitoring center, and specifically includes the following sub-steps:
step 310, pre-constructing and training a video anomaly analysis model based on a neural network;
specifically, various accident-scene video data are collected in advance and labelled by accident anomaly type to obtain accident-scene video data carrying accident anomaly type labels; the labelled accident-scene video data are then fed as sample data into a neural network model for training to obtain the video anomaly analysis model.
In the embodiment of the application, the training of the video anomaly analysis model based on the neural network specifically comprises the following substeps:
step 311, extracting local behavior characteristics corresponding to the various accident-scene video data, and summarizing the local behavior characteristics corresponding to the various accident-scene data to obtain multi-dimensional local behavior characteristics;
specifically, a one-dimensional convolution kernel w ∈ R is utilized in the convolution layera*hPerforming feature extraction on the vector matrix D to obtain a feature value Cn, wherein a represents the dimension of a vector, and h represents the size of a one-dimensional convolution kernel window;
specifically, features are extracted in the convolutional layer using the following formula:
Cn=f(w·xn:n+h-1+b)
n represents the number of convolution operation, m represents the number of convolution kernels, h represents the window size of a one-dimensional convolution kernel, n + h-1 represents n to n + h-1, f represents a nonlinear activation function, represents the corresponding operation of the sharing weight of the convolution kernels and the feature vector, x represents the input value of the feature vector matrix, w represents the weight, and b represents the deviation value;
then, the characteristic value is further extracted in the pooling layer by the following formula:
p_v = max[C_n]
where n is the index of the convolution operation. The features obtained by convolution are further condensed by the sampling of the pooling layer, which prevents over-fitting and enhances the robustness of the structure.
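As a concrete illustration, the two formulas above (convolution C_n = f(w · x_{n:n+h-1} + b) followed by max pooling p_v = max[C_n]) can be sketched in plain Python. The ReLU activation and the toy kernel values are assumptions for illustration; the patent does not name a specific activation function f:

```python
def conv1d_features(x, w, b):
    """Patent formula C_n = f(w · x_{n:n+h-1} + b), with f taken to be
    ReLU (an assumption; any nonlinear activation would fit the text)."""
    h = len(w)
    relu = lambda v: max(0.0, v)
    return [relu(sum(wk * xk for wk, xk in zip(w, x[n:n + h])) + b)
            for n in range(len(x) - h + 1)]

def max_pool(c):
    """Pooling-layer step p_v = max[C_n]: keep the strongest response."""
    return max(c)

# Toy feature vector and a window-3 kernel (hypothetical values).
x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
w = [0.5, 1.0, 0.5]
c = conv1d_features(x, w, b=-1.0)   # -> [1.0, 3.0, 4.0, 3.0]
pv = max_pool(c)                    # -> 4.0
```

In a full model these pooled responses would feed the classification layers of step 314; here they are computed once to show the shape of the operation.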
Step 313, performing dimensionality-reduction processing on the multi-dimensional local behavior features to obtain illegal-behavior features for the various accident anomaly types;
Step 314, classifying the illegal-behavior features corresponding to the various accident anomaly types to obtain a video anomaly analysis model for identifying the various accident anomaly types.
Step 320, receiving real-time law enforcement videos and real-time geographic positions uploaded by the mobile law enforcement vehicles, and receiving real-time geographic positions and idle states uploaded by the law enforcement assistance vehicles;
in the embodiment of the application, the mobile law enforcement vehicle and the law enforcement assistance vehicle communicate with the monitoring center through the internet of things; the mobile law enforcement vehicle reports its real-time law enforcement video, real-time geographic position and vehicle number to the monitoring center, and the law enforcement assistance vehicle reports its real-time geographic position, current idle state and vehicle number to the monitoring center.
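The two report payloads can be illustrated as JSON messages; every field name here is a hypothetical choice, since the patent only lists which items each vehicle uploads:

```python
import json

def enforcement_report(vehicle_no, video_ref, lat, lon):
    """Report from a mobile law enforcement vehicle: real-time video
    reference, real-time position and vehicle number (field names are
    assumptions, not part of the patent)."""
    return json.dumps({"type": "law_enforcement", "vehicle_no": vehicle_no,
                       "video": video_ref, "lat": lat, "lon": lon})

def assistance_report(vehicle_no, lat, lon, idle):
    """Report from a law enforcement assistance vehicle: real-time
    position, current idle state and vehicle number."""
    return json.dumps({"type": "assistance", "vehicle_no": vehicle_no,
                       "lat": lat, "lon": lon, "idle": idle})
```

The monitoring center would parse these messages and index them by vehicle number and report time for the lookups in steps 350-360.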
Step 330, performing primary analysis on the video reported by the mobile law enforcement vehicle, and identifying the video data of the accident;
specifically, the video reported by the mobile law enforcement vehicle is cut into video frames, each video frame image is processed, and video data of an accident is identified according to a preset identification method; the preset identification method includes detecting that the distance between two adjacent vehicles is within 15 pixels (a suspected collision) and that a vehicle in a traffic lane does not move within a preset time (a suspected post-collision stop).
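A sketch of such a first-pass screen over per-frame vehicle positions (the vehicle-tracking input format, the Euclidean distance metric and the 30-frame stillness window are assumptions; the patent specifies only the 15-pixel proximity and not-moving criteria):

```python
def flag_accident_frames(tracks, pixel_threshold=15, still_frames=30):
    """First-pass accident screen (illustrative sketch, not the patented
    detector). `tracks` maps vehicle id -> list of (x, y) centroids, one
    per frame. A frame index is flagged when two vehicles come within
    `pixel_threshold` pixels (suspected collision) or a vehicle stays put
    for `still_frames` consecutive frames (suspected post-collision stop)."""
    flags = set()
    ids = sorted(tracks)
    # Suspected collision: adjacent-vehicle gap within the pixel threshold.
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            for f, (pa, pb) in enumerate(zip(tracks[a], tracks[b])):
                gap = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
                if gap <= pixel_threshold:
                    flags.add(f)
    # Suspected stop: no movement over `still_frames` consecutive frames.
    for vid in ids:
        pts = tracks[vid]
        run = 1
        for f in range(1, len(pts)):
            run = run + 1 if pts[f] == pts[f - 1] else 1
            if run >= still_frames:
                flags.add(f)
    return sorted(flags)
```

Only the flagged frame ranges would be forwarded to the video anomaly analysis model in step 340, keeping the heavier neural-network analysis off routine footage.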
Step 340, analyzing the identified accident video data according to the trained video anomaly analysis model to obtain an accident handling result matched with the field accident, and determining the type of the law enforcement assistance vehicle to be dispatched according to the accident handling result;
specifically, the identified accident video data is input into the trained video anomaly analysis model and the accident anomaly type matching the video is recognized; the type of law enforcement assistance vehicle to be dispatched is then determined from the accident anomaly type. For example, if the analysis shows a severe vehicle collision, the types of law enforcement assistance vehicle to be dispatched are a tow truck and an ambulance; if the analysis shows only a scratch, the type of law enforcement assistance vehicle to be dispatched is a traffic-police patrol car.
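The mapping from accident anomaly type to assistance-vehicle type can be kept as a simple dispatch table; the string labels below are hypothetical stand-ins for the examples in the text (severe collision → tow truck and ambulance, scratch → traffic-police patrol car):

```python
# Hypothetical dispatch table; labels are illustrative, not from the patent.
DISPATCH_TABLE = {
    "severe_collision": ["tow_truck", "ambulance"],
    "scratch": ["traffic_patrol_car"],
}

def vehicle_types_for(anomaly_type):
    """Look up which law enforcement assistance vehicle types to dispatch;
    fall back to a patrol car for unlisted anomaly types (an assumption)."""
    return DISPATCH_TABLE.get(anomaly_type, ["traffic_patrol_car"])
```

Keeping this as data rather than branching logic lets the monitoring center add new anomaly types without code changes.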
Optionally, before inputting the identified accident video data into the video anomaly analysis model, the method further includes preprocessing the accident video data, and specifically includes the following sub-steps:
step 341, judging whether the accident video has a fuzzy image, if so, executing step 342, otherwise, directly inputting the accident video data into a video anomaly analysis model;
step 342, decoding the video stream to form a plurality of frame images, and searching for blurred frame images in the plurality of frame images;
specifically, after the video is decoded into a plurality of frame images, the frame images are screened one by one to find the one or more erroneous frame images; once an erroneous frame image has been found, the erroneous region within it must be determined. If the erroneous region were searched for over the complete frame image, screening errors caused by the background or the background color could occur;
therefore the frame image must be divided into region blocks and the erroneous region searched for block by block. The segmentation of the erroneous frame image may specifically include the following sub-steps:
step 342-1: the frame image is divided into two parts according to the gray scale.
Step 342-2: and calculating the average gray scale of the divided frame images.
Step 342-3: the variance is determined from the mean gray level.
Step 342-4: and traversing any two selected gray levels, completing the calculation of the steps 342-2-342-3, and determining the segmentation variance.
Step 342-5: and determining an optimal division threshold according to the segmentation variance, and performing optimal segmentation on the background and non-background frame images.
Step 343, finding the erroneous starting region block within the erroneous region, reconstructing the erroneous regions in sequence, and encoding the reconstructed frame images to form a video image;
the reconstruction of the starting region block is specifically the reconstruction of a pixel value, and the pixel value P(i, j) at the reconstruction coordinate may be specifically expressed as:

P(i, j) = (d0·P0 + d1·P1 + d2·P2 + d3·P3) / (d0 + d1 + d2 + d3)

where d0 is the distance from the reconstruction coordinate of the selected starting region block to the pixel coordinates of the left adjacent region block, and P0 is the pixel value at the pixel coordinates of the right adjacent region block; d1 is the distance from the reconstruction coordinate of the starting region block to the center point of the pixel coordinates of the right adjacent region block, and P1 is the pixel value at the pixel coordinates of the left adjacent region block; d2 is the distance from the reconstruction coordinate of the starting region block to the center point of the pixel coordinates of the upper adjacent region block, and P2 is the pixel value at the pixel coordinates of the lower adjacent region block; d3 is the distance from the reconstruction coordinate of the starting region block to the center point of the pixel coordinates of the lower adjacent region block, and P3 is the pixel value at the pixel coordinates of the upper adjacent region block.
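Under the interpretation that each neighbouring block's pixel value is weighted by the distance to the opposite-side block and normalised by the total distance (an assumption about the exact formula, consistent with the pairings of d0..d3 and P0..P3 above, so that nearer neighbours contribute more), the reconstruction can be sketched as:

```python
def reconstruct_pixel(d, p):
    """Distance-weighted reconstruction of an erroneous pixel from its
    four neighbouring region blocks. d = (d0, d1, d2, d3) are the
    distances to the left/right/upper/lower neighbours; p = (P0, P1,
    P2, P3) are the opposite-side pixel values paired with them.
    The sum-of-distances normalisation is an assumption."""
    num = sum(di * pi for di, pi in zip(d, p))
    den = sum(d)
    return num / den
```

With equal distances this reduces to the plain average of the four neighbouring values, which is a sanity check on the weighting.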
Step 350, determining the geographic positions of all the law enforcement assistance vehicles of the type to be dispatched at the reporting time, according to the time at which the scene-accident video was reported and the type of law enforcement assistance vehicle to be dispatched;
specifically, when a video reported by a mobile law enforcement vehicle is analyzed as an abnormal video in which an accident occurs, the monitoring center looks up, according to the time at which the video was reported, the positions at that time point of all law enforcement assistance vehicles of the type to be dispatched; optionally, the location of the accident and the locations of all law enforcement assistance vehicles to be dispatched are marked on the display screen of the monitoring center.
Step 360, acquiring the real-time geographic position of the mobile law enforcement vehicle for shooting the scene accident, and determining the optimal assistance vehicle according to the geographic position of the law enforcement assistance vehicle to be dispatched and the real-time geographic information of the mobile law enforcement vehicle;
when the mobile law enforcement vehicle reports a law enforcement video, it also reports its real-time geographic position. When an accident is identified, the monitoring center selects the optimal law enforcement assistance vehicle according to the real-time geographic position of the mobile law enforcement vehicle and the geographic positions of the law enforcement assistance vehicles to be dispatched. If the analysis shows a severe vehicle collision, the optimal law enforcement assistance vehicles selected are the nearest idle tow truck and ambulance; if the analysis shows only a scratch, the optimal law enforcement assistance vehicle selected is the nearest idle traffic-police patrol car;
in the embodiment of the application, the optimal law enforcement assistance vehicle is determined according to the following formula:
T_i = Σ_{j=1}^{m} (S_ij / V_ij) + t_i,  i = 1, …, n

where T_i is the time required for the ith law enforcement assistance vehicle to reach the law enforcement scene (the location of the mobile law enforcement vehicle), and the i corresponding to the minimum T_i identifies the optimal assistance vehicle; n is the number of law enforcement assistance vehicles; m is the number of predetermined road segments between the ith law enforcement assistance vehicle and the mobile law enforcement vehicle; S_ij is the predetermined length of the jth predetermined road segment between the geographic position of the ith law enforcement assistance vehicle and that of the mobile law enforcement vehicle; V_ij is the predetermined travel speed on that jth predetermined road segment; and t_i is a predetermined preparation time for the ith law enforcement assistance vehicle;
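The selection rule, T_i = Σ_j (S_ij / V_ij) + t_i minimised over i, can be sketched directly; the (segments, preparation-time) input shape is an assumption for illustration:

```python
def best_assistance_vehicle(vehicles):
    """Evaluate T_i = sum_j(S_ij / V_ij) + t_i for each candidate and
    return (index of the minimum, all travel times). `vehicles` is a
    list of (segments, t_prep), where segments is a list of
    (S_ij length, V_ij speed) pairs in consistent units."""
    times = [sum(s / v for s, v in segs) + t_prep
             for segs, t_prep in vehicles]
    best = min(range(len(times)), key=lambda i: times[i])
    return best, times
```

In practice the candidate list would first be filtered to idle vehicles of the required type, as described in steps 340-350.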
for example, during the travel of a city-management law enforcement vehicle, a vehicle collision is captured by the vehicle-mounted camera and the collision video is uploaded to the monitoring center. After receiving the real-time law enforcement video, the monitoring center processes it with the pre-trained video anomaly analysis model and determines the severity of the collision from the video, then selects the optimal law enforcement assistance vehicle to call on according to the severity of the collision and the geographic positions involved. If the analysis shows a severe collision, the optimal law enforcement assistance vehicles selected are the nearest idle tow truck and ambulance; if the analysis shows only a scratch, the optimal law enforcement assistance vehicle selected is the nearest idle patrol car.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A mobile video monitoring method is characterized by comprising the following steps:
receiving real-time law enforcement videos and real-time geographic positions uploaded by a mobile law enforcement vehicle, and receiving real-time geographic positions and idle states uploaded by a law enforcement assistance vehicle;
performing primary analysis on a real-time law enforcement video uploaded by a mobile law enforcement vehicle, and identifying video data of an accident;
analyzing the identified accident video data according to a pre-trained video anomaly analysis model to obtain an accident handling result matched with a field accident, and determining the type of the law enforcement assistance vehicle to be dispatched according to the accident handling result;
determining the geographic positions of all the law enforcement assistance vehicles to be dispatched in the time point according to the time reported by the scene accident video and the types of the law enforcement assistance vehicles to be dispatched;
determining an optimal law enforcement assistance vehicle according to the geographic position and the idle state of the law enforcement assistance vehicles to be dispatched and the real-time geographic information of the mobile law enforcement vehicle matched with the scene-accident video, and sending an execution instruction to the optimal law enforcement assistance vehicle;
before inputting the identified accident video data into the video anomaly analysis model, the method also comprises the following steps of preprocessing the accident video data:
step 341, judging whether the accident video has a fuzzy image, if so, executing step 342, otherwise, directly inputting the accident video data into a video anomaly analysis model;
step 342, decoding the video stream to form a plurality of frame images, and searching for blurred frame images in the plurality of frame images;
decoding the video to form a plurality of frame images, screening the plurality of frame images one by one, finding the one or more erroneous frame images, and, after screening out an erroneous frame image, determining the erroneous region within it; specifically, dividing the frame image into region blocks and searching for the erroneous region block by block; the segmentation of the erroneous frame image specifically comprises the following sub-steps:
step 342-1: dividing the frame image into two parts according to a gray level;
step 342-2: calculating the average gray level of each divided part of the frame image;
step 342-3: determining the variance from the average gray levels;
step 342-4: traversing any two selected gray levels, repeating the calculation of steps 342-2 to 342-3, and determining the segmentation variance;
step 342-5: determining the optimal division threshold from the segmentation variance, and optimally segmenting the frame image into background and non-background;
step 343, finding an error initial area block in the error area, sequentially reconstructing the error area, and encoding the reconstructed frame image to form a video image;
the reconstruction of the starting region block is specifically the reconstruction of a pixel value, and the pixel value P(i, j) at the reconstruction coordinate may be specifically expressed as:

P(i, j) = (d0·P0 + d1·P1 + d2·P2 + d3·P3) / (d0 + d1 + d2 + d3)

wherein d0 is the distance from the reconstruction coordinate of the selected starting region block to the pixel coordinates of the left adjacent region block, and P0 is the pixel value at the pixel coordinates of the right adjacent region block; d1 is the distance from the reconstruction coordinate of the starting region block to the center point of the pixel coordinates of the right adjacent region block, and P1 is the pixel value at the pixel coordinates of the left adjacent region block; d2 is the distance from the reconstruction coordinate of the starting region block to the center point of the pixel coordinates of the upper adjacent region block, and P2 is the pixel value at the pixel coordinates of the lower adjacent region block; d3 is the distance from the reconstruction coordinate of the starting region block to the center point of the pixel coordinates of the lower adjacent region block, and P3 is the pixel value at the pixel coordinates of the upper adjacent region block.
2. The mobile video monitoring method according to claim 1, wherein video data of various accident scenes are collected in advance and labeled according to accident anomaly type to obtain accident scene video data carrying accident anomaly type labels; the various types of accident scene video data carrying accident anomaly type identifiers are then input as sample data into a convolutional neural network model for training to obtain the video anomaly analysis model.
3. The mobile video monitoring method according to claim 2, wherein inputting the data into the convolutional neural network model for training to obtain the video anomaly analysis model specifically comprises the following sub-steps:
extracting local behavior characteristics corresponding to various accident scene video data;
summarizing local behavior characteristics corresponding to various accident scene data to obtain multi-dimensional local behavior characteristics;
performing dimensionality reduction processing on the multi-dimensional local behavior characteristics to obtain illegal behavior characteristics of various accident abnormal types;
and classifying the illegal behavior characteristics corresponding to various accident abnormal types to obtain a video abnormal analysis model for identifying various accident abnormal types.
4. The mobile video monitoring method according to claim 1, wherein performing primary analysis on the real-time law enforcement video uploaded by the mobile law enforcement vehicle to identify video data of an accident specifically comprises: cutting the video reported by the mobile law enforcement vehicle into video frames, processing each video frame image, and determining as accident video data the video frames in which the distance between two adjacent vehicles is detected to be within a preset range and a vehicle in a traffic lane does not move within a preset time.
5. The mobile video surveillance method of claim 1, wherein determining the optimal law enforcement assistance vehicle is specifically selecting the optimal law enforcement assistance vehicle based on the real-time geographic location of the mobile law enforcement vehicle and the geographic location of the law enforcement vehicle to be dispatched, upon identifying the occurrence of the accident:
T_i = Σ_{j=1}^{m} (S_ij / V_ij) + t_i,  i = 1, …, n

wherein T_i is the time required for the ith law enforcement assistance vehicle to reach the mobile law enforcement vehicle, the i corresponding to the minimum T_i identifies the optimal assistance vehicle, and n is the number of law enforcement assistance vehicles; m is the number of predetermined road segments between the ith law enforcement assistance vehicle and the mobile law enforcement vehicle, S_ij is the predetermined length of the jth predetermined road segment between the geographic position of the ith law enforcement assistance vehicle and that of the mobile law enforcement vehicle, V_ij is the predetermined travel speed on the jth predetermined road segment between them, and t_i is a predetermined preparation time for the ith law enforcement assistance vehicle.
6. A mobile video surveillance center, characterized in that it performs the mobile video surveillance method according to any one of claims 1-5; the mobile video monitoring center includes:
the internet of things communication module is used for receiving real-time law enforcement videos and real-time geographic positions uploaded by the mobile law enforcement vehicle and receiving real-time geographic positions and idle states uploaded by the law enforcement assistance vehicle;
the accident analysis module is used for carrying out primary analysis on the real-time law enforcement video uploaded by the mobile law enforcement vehicle and identifying the video data of the accident; the video anomaly analysis module is used for analyzing the identified accident video data according to a pre-trained video anomaly analysis model to obtain an accident processing result matched with the field accident;
the dispatching law enforcement aid vehicle module is used for determining the type of the law enforcement aid vehicle to be dispatched according to the accident processing result, and determining the geographic positions of all the law enforcement aid vehicles to be dispatched in the time point according to the time reported by the scene accident video and the type of the law enforcement aid vehicle to be dispatched; determining an optimal law enforcement assisting vehicle according to the geographic position and the idle state of the law enforcement assisting vehicle to be dispatched and the real-time geographic information of the mobile law enforcement vehicle matched with the scene accident video;
the internet of things communication module is further used for sending an execution instruction to the optimal law enforcement assistance vehicle.
7. The mobile video monitoring center according to claim 6, further comprising a video anomaly analysis model training module for collecting various types of accident scene video data in advance and marking them according to accident anomaly type to obtain accident scene video data carrying accident anomaly type identifiers; the various types of accident scene video data carrying accident anomaly type identifiers are then input as sample data into a convolutional neural network model for training to obtain the video anomaly analysis model;
the video anomaly analysis model training module is specifically used for extracting local behavior features corresponding to the various types of accident scene video data; summarizing the local behavior features corresponding to the various types of accident scene data to obtain multi-dimensional local behavior features; performing dimensionality-reduction processing on the multi-dimensional local behavior features to obtain illegal-behavior features of the various accident anomaly types; and classifying the illegal-behavior features corresponding to the various accident anomaly types to obtain the video anomaly analysis model for identifying the various accident anomaly types.
8. A mobile video surveillance system comprising a mobile video surveillance center according to any of claims 6-7, and at least one mobile law enforcement vehicle and at least one law enforcement assistance vehicle;
the mobile law enforcement vehicle is used for collecting real-time law enforcement videos and uploading the real-time law enforcement videos and the real-time geographic position of the mobile law enforcement vehicle to the monitoring center;
the law enforcement assistance vehicle is used to upload to the monitoring center the real-time geographic location of the law enforcement assistance vehicle and the current idle status of the law enforcement assistance vehicle.
9. The mobile video surveillance system of claim 8, wherein the mobile law enforcement vehicle comprises an automotive-grade control chip and a vehicle-mounted video acquisition module, a GPS positioning module and an internet of things communication module connected with the automotive-grade control chip;
the vehicle-mounted video acquisition module is used for shooting law enforcement videos in real time through a vehicle-mounted camera while the mobile law enforcement vehicle is travelling and sending the vehicle's real-time law enforcement video to the automotive-grade control chip;
the GPS positioning module is used for tracking and collecting the current geographic information of the mobile law enforcement vehicle in real time and sending the vehicle's real-time geographic information to the automotive-grade control chip;
the automotive-grade control chip is used for sending the real-time law enforcement video and the real-time geographic information of the mobile law enforcement vehicle to the internet of things communication module;
the internet of things communication module realizes communication between the mobile law enforcement vehicle and the monitoring center through the internet of things, and uploads real-time law enforcement videos and real-time geographic information of the mobile law enforcement vehicle to the monitoring center through the internet of things.
10. The mobile video surveillance system of claim 8, wherein the law enforcement assistance vehicle comprises an automotive-grade control chip and a GPS positioning module and an internet of things communication module connected to the automotive-grade control chip;
the GPS positioning module is used for tracking and collecting the current geographic information of the law enforcement assistance vehicle in real time and sending the vehicle's real-time geographic information to the automotive-grade control chip;
the automotive-grade control chip is used for sending the real-time geographic information and the current idle state of the law enforcement assistance vehicle to the internet of things communication module;
the internet of things communication module realizes communication between the law enforcement aid vehicle and the monitoring center through the internet of things, and real-time geographic information and the current idle state of the law enforcement aid vehicle are uploaded to the monitoring center through the internet of things.
CN201911122143.9A 2019-11-15 2019-11-15 Mobile video monitoring method, monitoring center and system Active CN110838230B (en)

Publications (2)

Publication Number Publication Date
CN110838230A (en) 2020-02-25
CN110838230B (en) 2020-12-22





