WO2020024457A1 - Method, apparatus, and computer-readable storage medium for determining responsibility for a traffic accident - Google Patents

Method, apparatus, and computer-readable storage medium for determining responsibility for a traffic accident

Info

Publication number
WO2020024457A1
WO2020024457A1 (PCT/CN2018/111701)
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
target
target feature
illegal
traffic accident
Prior art date
Application number
PCT/CN2018/111701
Other languages
English (en)
French (fr)
Inventor
唐雯静
黄章成
王健宗
肖京
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020024457A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items

Definitions

  • The present application relates to the technical field of traffic accident processing, and in particular to a method, a device, and a non-volatile computer-readable storage medium for determining responsibility for a traffic accident.
  • The embodiments of the present application provide a method, a device, and a non-volatile computer-readable storage medium for determining responsibility for a traffic accident, which solve the problem that the related technology requires a great deal of manpower to determine responsibility for a traffic accident.
  • a method for determining a traffic accident includes:
  • the behavior rules corresponding to the clustering set of each target feature are input into a pre-built illegal behavior detection model for behavior detection, and the behavior detection result of each target feature is obtained; the illegal behavior detection model is used to detect the types of illegal behaviors of each target feature within the preset time period corresponding to the traffic accident.
  • a traffic accident responsibility report is generated according to the behavior detection results of each target feature.
  • a traffic accident determination device includes:
  • An obtaining unit configured to obtain a video information stream within a preset time period corresponding to a traffic accident
  • a clustering unit configured to cluster each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set in a preset time period corresponding to a traffic accident;
  • a detection unit configured to input a behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection, and obtain behavior detection results of each target feature;
  • the illegal behavior detection model is used for detecting the types of illegal behaviors of each target feature within the preset time period corresponding to the occurrence of a traffic accident;
  • a generating unit is configured to generate a traffic accident responsibility report according to the behavior detection results of each target feature.
  • A non-volatile computer-readable storage medium in which computer-readable instructions are stored; when the instructions are executed by a processor, the following steps are implemented:
  • the behavior rule corresponding to the clustering set of each target feature is input into a pre-built illegal behavior detection model for behavior detection, and the behavior detection results of each target feature are obtained;
  • the illegal behavior detection model is used to detect the types of illegal behaviors of each target feature within the preset time period corresponding to the traffic accident;
  • a traffic accident responsibility report is generated according to the behavior detection results of each target feature.
  • A computer device includes a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor; when the processor executes the instructions, the following steps are implemented:
  • the behavior rule corresponding to the clustering set of each target feature is input into a pre-built illegal behavior detection model for behavior detection, and the behavior detection results of each target feature are obtained;
  • the illegal behavior detection model is used to detect the types of illegal behaviors of each target feature within the preset time period corresponding to the traffic accident;
  • a traffic accident responsibility report is generated according to the behavior detection results of each target feature.
  • The embodiment of the present application does not need to arrange for a traffic police officer to investigate the accident site, reduces the workload of the traffic police, can determine responsibility for illegal acts during the driving of the vehicle, and increases drivers' awareness of traffic regulations.
  • FIG. 1 is a flowchart of a method for determining a traffic accident according to an embodiment of the present application
  • FIG. 2 is a flowchart of another method for determining a traffic accident according to an embodiment of the present application
  • FIG. 3 is a flowchart of constructing an illegal behavior detection model according to an embodiment of the present application.
  • FIG. 4 is a structural block diagram of a device for determining a traffic accident according to an embodiment of the present application
  • FIG. 5 is a structural block diagram of another apparatus for determining a traffic accident according to an embodiment of the present application.
  • FIG. 6 is a block diagram of a traffic accident determination device 400 according to an embodiment of the present application.
  • FIG. 1 is a first flowchart according to an embodiment of the present application. As shown in FIG. 1, the process includes the following steps:
  • Step S101 Obtain a video information stream within a preset time period corresponding to a traffic accident
  • The preset time period may be, for example, the 30 or 60 seconds immediately preceding the traffic accident.
  • The preset time period may also be a time interval containing the traffic accident; for example, if the traffic accident occurred at 9:10, the preset time period may be set from 9:00 to 9:20, which is not limited in the embodiment of the present application.
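  • As a minimal illustrative sketch (the helper name and default window lengths are assumptions, not from this application), such a preset time period around an accident timestamp could be computed as:

```python
from datetime import datetime, timedelta

def preset_window(accident_time: datetime, before: int = 600, after: int = 600):
    """Return (start, end) of a preset time period around the accident.

    `before`/`after` are seconds; the defaults give a 9:00-9:20 window
    for an accident at 9:10, matching the example above.
    """
    return (accident_time - timedelta(seconds=before),
            accident_time + timedelta(seconds=after))
```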
  • The embodiment of the present application obtains a video information stream within the preset time period corresponding to the traffic accident, retrieves a video of the traffic accident process, and then analyzes the accident process so as to determine responsibility for the accident.
  • Step S102 clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within a preset time period corresponding to a traffic accident;
  • the target feature is a feature appearing in the video information stream within a preset time period corresponding to the occurrence of a traffic accident, such as a vehicle, a zebra crossing, a lawn, a person, and the like.
  • Specifically, moving targets in the video information stream can be detected by a time-difference (frame-differencing) method or an optical flow method, and each target feature is then extracted from the detected motion areas.
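  • A minimal sketch of the time-difference (frame-differencing) idea; NumPy is an assumed dependency and the threshold value is arbitrary:

```python
import numpy as np

def motion_mask(prev_frame: np.ndarray, cur_frame: np.ndarray,
                thresh: int = 25) -> np.ndarray:
    """Frame-differencing detection: mark pixels whose absolute
    grayscale change between consecutive frames exceeds `thresh`.
    The True region approximates the motion area of moving targets."""
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh
```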
  • A clustering set is obtained for each target feature; the clustering set of a target feature is equivalent to the set of its detections in the video information stream.
  • the behavior rule corresponding to the target feature clustering set can be the moving trajectory of the target feature in the video information stream.
  • For example, the moving trajectory of vehicle m is from position a to position b, while the moving trajectory of pedestrian n is staying in place or crossing the road, which is not limited in the embodiment of the present application.
  • the behavior rules of the above target features not only include the moving trajectory of the target feature in the video information stream, but also include appearance features such as the color, size, and shape change of the target feature.
  • For example, if the target feature is pedestrian p, the appearance feature of p may be falling to the ground; if the target feature is a vehicle, the appearance feature of the vehicle may be deformation.
  • The embodiments of the present application cluster each target feature in the video information stream to form a target feature clustering set, which may be continuous motion data or static data of the target feature, from which the behavior rule corresponding to the clustering set within the preset time period corresponding to the traffic accident can be determined.
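  • The clustering described above can be sketched as grouping per-frame detections by target identity into time-ordered trajectories; the tuple format and helper name here are illustrative assumptions, not a specification from this application:

```python
from collections import defaultdict

def cluster_by_target(detections):
    """Group per-frame detections into per-target clustering sets.

    `detections` is a list of (frame_idx, target_id, (x, y)) tuples; the
    returned dict maps each target_id to its time-ordered trajectory,
    i.e. a simple form of the 'behavior rule' discussed in step S102.
    """
    clusters = defaultdict(list)
    for frame_idx, target_id, pos in sorted(detections):
        clusters[target_id].append(pos)
    return dict(clusters)
```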
  • Step S103 The behavior rule corresponding to each target feature clustering set is input into a pre-built illegal behavior detection model for behavior detection, and the behavior detection results of each target feature are obtained.
  • the illegal behavior detection model is a detection model constructed to detect the types of illegal behaviors corresponding to the target characteristic behavior laws.
  • The illegal behavior detection model records the mapping relationship between the behavior characteristics of target features and the types of illegal behaviors, so that when a traffic accident occurs, the behavior rule of each target feature can be identified.
  • The behavior detection result is the type of illegal behavior of the target feature, such as a vehicle driving in an illegal direction or a pedestrian running a red light.
  • The convolutional neural network here has a multi-layer structure; each layer has different input and output parameters and implements a different function.
  • The traffic violation behavior rules of each target feature, collected in advance, can be repeatedly trained and summarized to establish the motion characteristics and behavior paradigms of different illegal behaviors, thereby constructing the illegal behavior detection model.
  • the illegal behavior detection model records the mapping relationship between the behavior characteristics of target characteristics and the types of illegal behaviors.
  • the illegal behavior detection model can detect the types of violations corresponding to the behavior characteristics of target features.
  • the above-mentioned traffic violation behavior laws of each target feature can be the movement behavior characteristics of the traffic violation behavior laws described in multiple dimensions.
  • For example, the movement characteristics of the illegal behavior rules can be described in the time dimension, specifically according to the movement characteristics of the target feature at different time points; they can also be described in the spatial dimension, specifically according to the movement characteristics of the target feature at different position points, which is not limited here.
  • Other dimensions, such as appearance characteristics, can also be added to describe the motion characteristics of the illegal behaviors of the target features.
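  • A small sketch of describing a behavior rule in both dimensions: per-step displacement for the spatial dimension and speed for the time dimension. The feature choice and function name are illustrative assumptions; this application leaves the dimensions open:

```python
import numpy as np

def movement_features(trajectory, timestamps):
    """Describe a target feature's behavior rule in two dimensions:
    displacement between position points (spatial dimension) and
    speed between time points (time dimension)."""
    pts = np.asarray(trajectory, dtype=float)
    ts = np.asarray(timestamps, dtype=float)
    disp = np.diff(pts, axis=0)                # spatial dimension
    dt = np.diff(ts)
    speed = np.linalg.norm(disp, axis=1) / dt  # time dimension
    return disp, speed
```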
  • the structure of a specific convolutional neural network model can be implemented through convolutional layers, fully connected layers, and pooling layers.
  • The convolutional layer here is equivalent to the hidden layer of the convolutional neural network and can itself have a multi-layer structure. After the traffic violation behavior rules of each target feature are collected, deeper illegal behavior feature vectors are extracted through the convolutional layers of the convolutional neural network model, and the illegal behavior feature vectors are further fitted and trained according to the known types of illegal behavior;
  • pooling layers are often inserted at intervals between consecutive convolutional layers;
  • The fully connected layer here is similar to the convolutional layer in that its neurons are connected to local areas of the previous layer's output. To avoid producing too many output feature vectors, two fully connected layers can be set: after the illegal behavior feature vectors have been trained through several convolutional layers, the fully connected layers integrate the training output feature vectors.
  • Step S104 A traffic accident responsibility report is generated according to the behavior detection results of each target feature.
  • The behavior detection result of each target feature is the type of illegal behavior corresponding to the behavior rule of the target feature, for example, a target vehicle driving against traffic or a target pedestrian running a red light.
  • The traffic accident responsibility report may record the type of illegal behavior, the accident party corresponding to the illegal behavior, and the information of that accident party. For example, if the type of illegal behavior is a rear-end collision, the accident party corresponding to the illegal behavior is vehicle A, and the accident party information may include the owner's personal information.
  • The handling of a traffic accident may involve a single party or coordinated handling by multiple parties. For example, a rear-end accident needs to be handled by both car owners and their insurance companies, while a wrong-way driving accident needs to be handled by the wrong-way driver. Therefore, for different types of illegal behaviors, traffic accident responsibility reports suitable for the different handling parties need to be generated, so that each handling party can deal with the traffic accident in a timely manner.
  • The embodiment of the present application does not need to arrange for a traffic police officer to investigate the accident site, reduces the workload of the traffic police, can determine responsibility for illegal acts during the driving of the vehicle, and increases drivers' awareness of traffic regulations.
  • FIG. 2 is a flowchart of a method for determining a traffic accident according to a preferred embodiment of the present application. As shown in FIG. 2, the method includes the following steps:
  • Step S201 Obtain a video information stream in a preset time period corresponding to a traffic accident.
  • The video information stream here may be vehicle driving data captured by a road camera or vehicle driving data recorded by a vehicle-mounted driving recorder, which is not limited in the embodiment of the present application.
  • a road monitoring camera provided at an intersection can be connected through a preset interface.
  • The road monitoring camera can capture the rear of the vehicle and monitor illegal acts such as running red lights, failing to follow lane guidance, illegal lane changes, crossing solid lines, and driving in the wrong direction, from which the video stream data of the vehicle's driving process is obtained. Alternatively, the video stream data can be read from the memory card of a car driving recorder, which records video images and sounds during driving and monitors the vehicle's surroundings.
  • Step S202 Perform frame processing on the video information stream within a preset time period to obtain multi-frame video image information.
  • the information contained in a video stream can be divided into spatial information and time information.
  • the spatial information is expressed in the form of each frame of the video, such as scene objects appearing in the video.
  • The time information is manifested in the form of motion changes between frames, such as the movement of objects in the scene.
  • The video information stream within the preset time period is divided into frames to obtain multi-frame video image information, and each frame of video image is analyzed to obtain more image information.
  • the embodiment of the present application may also use an optical flow image calculation method to obtain an optical flow image sequence corresponding to a video image sequence, and the optical flow image sequence may reflect the motion between two consecutive images.
  • the optical flow image sequence needs to be processed in advance to ensure that the length of the optical flow image sequence is consistent with the length of the original image sequence.
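  • A sketch of the length-alignment step: an optical-flow sequence computed from N images naturally has N-1 fields, so it can be padded to match the original sequence length. Repeating the last field is an assumed padding choice, not prescribed by this application:

```python
def align_flow_length(flow_seq, image_seq):
    """Pad the optical-flow sequence so its length matches the original
    image sequence, e.g. by repeating the last flow field."""
    if not image_seq:
        return []
    padded = list(flow_seq)
    while len(padded) < len(image_seq):
        padded.append(padded[-1] if padded else None)
    return padded[:len(image_seq)]
```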
  • Step S203 Each target feature is extracted from each frame of video image information.
  • Specifically, the correlation between multiple frames of video image information can be used to partition each frame into multiple target regions. For example, if the position information of target feature A in the first 5 frames of video images lies in one region and the position information of target feature B is located on the right side of the video image, the video image can be divided into two corresponding areas.
  • The target areas can be adjusted in real time as detection proceeds, and target features are then detected within them.
  • each target feature is extracted from each frame of video image information.
  • Unlike target feature detection in static images, target feature detection in dynamic images usually uses the relationship between consecutive frames of the video information stream to locate the area of interest. Because target features often exhibit motion, multiple frames of video images need to be introduced, so that not only the appearance information of the target features in each frame but also their motion information across frames can be obtained.
  • the target features in each frame of video image information can be specifically implemented based on motion segmentation or background extraction.
  • For example, target features such as the current road, the vehicles driven by the accident parties, the accident parties themselves, and the road markings are extracted from the road camera's monitoring images of vehicle driving.
  • Step S204 The correlation between the multi-frame video image information is used to filter each target feature in each frame of video image information, and the target features that meet the preset conditions are retained.
  • The target features in the video images may include some unqualified features, such as features that appear in the current frame and disappear in the next frame, or features that appear in every frame but have nothing to do with the traffic accident.
  • Filtering the target features takes into account their persistence in consecutive frames, such as consistency of size, color, and trajectory; unqualified target features in the video images can be deleted, and the target features that meet the preset conditions are retained.
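  • The persistence-based filtering of step S204 can be sketched as dropping targets with too few consecutive detections; the threshold and function name are illustrative assumptions:

```python
def filter_persistent(clusters, min_frames: int = 3):
    """Keep only target features that persist for at least `min_frames`
    detections; transient features (appearing in one frame and vanishing
    in the next) are dropped. `clusters` maps target id -> trajectory."""
    return {tid: traj for tid, traj in clusters.items()
            if len(traj) >= min_frames}
```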
  • Step S205 Clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set in a preset time period corresponding to a traffic accident.
  • Target feature clustering sets can be continuous motion data or stationary data of the target features.
  • For example, if the target feature pedestrian C appears in the first 8 frames of the image, clustering pedestrian C over those 8 frames may yield the behavior rule that pedestrian C ran a red light.
  • Each target feature clustering set, together with its corresponding behavior rule, is input into the pre-built illegal behavior detection model for behavior detection; the types of illegal behaviors in each target feature's behavior rule are thereby detected, without the need for traffic police to conduct an accident investigation.
  • Step S210 Find processor information corresponding to the illegal behavior of the target feature according to the behavior detection result of each target feature.
  • If the behavior detection result indicates that a target feature's behavior rule is illegal, the target feature can be initially classified as a responsible party; if the detection result indicates that the behavior rule is legal, it can be initially classified as a non-responsible party, so that the responsible and non-responsible parties are determined.
  • Because the processor information may differ for different detection results, the processor information corresponding to the target feature's type of illegal behavior is found according to the behavior detection result of each target feature. Specifically, identification information corresponding to the target feature can be obtained from the video information stream, and the processor information corresponding to the target feature's illegal behavior type is found according to that identification information.
  • For example, if the target feature is vehicle A and the detection result is the illegal behavior type of vehicle A running a red light, the license plate number of vehicle A is further obtained from the video information stream, and the corresponding processor information of vehicle A, such as owner information and car insurance information, is found.
  • a traffic accident may involve a single party, two parties, or even multiple parties.
  • If the target feature classified as the responsible party is not a person, the responsibility may be transferred to the owner of the target feature; likewise, if the target feature classified as a non-responsible party is not a person, the non-responsible role can be transferred to the owner of the target feature.
  • Step S211 The processor information corresponding to the target feature's illegal behavior type is filled into a preset report template to generate a traffic accident responsibility report.
  • the preset report template can record the types of illegal acts, the parties involved in the accident, and the party responsible for the accident.
  • For example, a police processor needs to know the type of the traffic accident and information about the parties involved, so the traffic accident responsibility report sent to the police processor mainly includes the accident type, the party responsible for the accident, and the parties to the accident; an insurance company processor needs to understand the responsibility for the traffic accident in order to determine subsequent claims, so the responsibility report sent to the insurance company processor mainly includes the party responsible for the accident, the parties to the accident, and the accident compensation plan.
  • The traffic accident responsibility report can also record the time information and location information of the traffic accident.
  • the time information can be the time of the traffic accident, the time of the vehicle collision, etc.
  • The location information can be the location of the vehicle collision, the location of the traffic light at the accident, and so on.
  • the traffic accident responsibility report can also record the identification information of the target feature, such as the character name, license plate number, road marking and other information.
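  • The template-filling step can be sketched with a plain string template. The field names below are illustrative assumptions drawn from the report contents described above; this application does not fix an exact schema:

```python
from string import Template

# Hypothetical report template; fields mirror the items named in the text.
REPORT_TEMPLATE = Template(
    "Traffic accident responsibility report\n"
    "Accident type: $accident_type\n"
    "Responsible party: $responsible_party\n"
    "Parties involved: $parties\n"
    "Time: $time  Location: $location\n"
)

def fill_report(**fields) -> str:
    """Fill the preset report template with processor/party information."""
    return REPORT_TEMPLATE.substitute(**fields)
```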
  • Each target feature in the video stream information within the preset time period corresponding to the traffic accident is clustered to obtain the behavior rule corresponding to each target feature clustering set in that time period; by inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection, the behavior detection results of each target feature are obtained, and responsibility for the traffic accident is determined.
  • The embodiment of the present application does not need to arrange for a traffic police officer to investigate the accident site, reduces the workload of the traffic police, can determine responsibility for illegal acts during the driving of the vehicle, and increases drivers' awareness of traffic regulations.
  • Before step S209, an illegal behavior detection model needs to be constructed, which can be implemented specifically through the following steps S206 to S208.
  • the process of constructing an illegal behavior detection model is not limited to be executed after steps S201 to S205.
  • the specific steps of constructing an illegal behavior detection model include the following steps:
  • step S206 video stream data corresponding to different traffic violations are collected in advance, and behavioral laws of multiple target features are extracted from the video stream data.
  • Specific cases of different traffic violations can be collected in advance, such as rear-end accidents, overtaking accidents, left-turn accidents, and right-turn accidents, to further obtain video stream data corresponding to the different traffic violations; through steps S202 to S205, the video stream data corresponding to the different illegal behaviors are processed, thereby extracting the behavior rules of multiple target features.
  • the specific processing process and extraction process are not described in detail here.
  • Step S207 Mark the behavior rules of the multiple target features according to the illegal behavior type corresponding to each target feature, and obtain multiple behavior rules that carry a label of the illegal behavior type.
  • The behavior rules of the multiple target features are labeled according to the type of illegal behavior corresponding to each target feature, yielding behavior rules that carry illegal behavior type labels, such as a behavior rule labeled with the red-light-running type or a behavior rule labeled with the rear-end-collision type.
  • Step S208 The multiple behavior rules carrying illegal behavior type labels are input as sample data into a convolutional neural network for training, and an illegal behavior detection model is constructed.
  • the convolutional neural network here is a multi-layered network model.
  • The process of inputting the multiple behavior rules with illegal behavior type labels into the convolutional neural network for training may specifically include: extracting local behavior characteristics of the behavior rules corresponding to the clustering sets through the convolutional layers of the illegal behavior detection model; summarizing these local behavior characteristics through the fully connected layer of the model to obtain multi-dimensional local behavior characteristics; performing dimensionality reduction on the multi-dimensional local behavior characteristics through the pooling layer of the model to obtain the illegal behavior characteristics corresponding to the clustering sets; and classifying the illegal behavior characteristics corresponding to the clustering sets through the classification layer of the model to obtain behavior detection results carrying each target feature's illegal behavior type.
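  • A toy NumPy forward pass in the spirit of the layers named above. Note it uses the conventional convolution → pooling → fully-connected → classification ordering (a standard CNN layout, not necessarily the exact order this application lists), and all shapes, weights, and function names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution: extract local behavior features
    from a flattened trajectory signal."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def max_pool(x, size=2):
    """Pooling layer: down-sample the feature sequence."""
    return np.array([x[i:i + size].max()
                     for i in range(0, len(x) - size + 1, size)])

def softmax(z):
    """Classification layer: turn logits into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(trajectory, conv_w, fc_w):
    """Toy forward pass: conv + ReLU -> pool -> fully connected -> softmax."""
    feat = np.maximum(conv1d(trajectory, conv_w), 0.0)
    pooled = max_pool(feat)
    logits = fc_w @ pooled
    return softmax(logits)
```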
  • Target features may appear larger in the video and undergo certain regular deformations; the embodiments of the present application learn these deformation laws, summarize the motion characteristics and behavior paradigms of different traffic violations, and then detect whether the target features conform to preset behavior changes.
  • Common behavior characteristics and motion laws can be represented by 3D descriptors, Markov-based shape dynamics, pose/primitive-action-based histograms, and more.
  • the illegal behavior detection model can be used to detect whether the behavioral characteristics of the target features meet the behavioral laws corresponding to the types of illegal behaviors and obtain the detection results.
  • FIG. 4 is a structural block diagram of a device for determining a traffic accident according to an embodiment of the present application.
  • the apparatus includes an obtaining unit 301, a clustering unit 302, a detecting unit 303, and a generating unit 304.
  • the obtaining unit 301 may be configured to obtain a video information stream in a preset time period corresponding to a traffic accident;
  • the clustering unit 302 may be configured to cluster each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within a preset time period corresponding to a traffic accident;
  • the detecting unit 303 may be configured to input the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection and obtain the behavior detection result of each target feature; the illegal behavior detection model is used to detect the types of illegal behaviors of each target feature within the preset time period corresponding to the traffic accident;
  • the generating unit 304 may be configured to generate a traffic accident responsibility report according to the behavior detection result of each target feature.
  • The embodiment of the present application does not need to arrange for a traffic police officer to conduct an investigation at the accident site, reduces the workload of the traffic police, can determine responsibility for illegal acts during the driving of the vehicle, and increases drivers' awareness of traffic regulations.
  • FIG. 5 is a schematic structural diagram of another traffic accident determination device according to an embodiment of the present application. As shown in FIG. 5, the device further includes:
  • the framing unit 305 may be configured to, before each target feature in the video information stream is clustered to obtain the behavior rule of each target feature within the preset time period corresponding to the traffic accident, perform frame processing on the video information stream within the preset time period to obtain multi-frame video image information;
  • the first extraction unit 306 may be configured to extract each target feature from each frame of video image information
  • the filtering unit 307 may be configured to filter each target feature in each frame of video image information by using the correlation between the multi-frame video image information after each target feature has been extracted from each frame, and retain the target features that meet the preset conditions;
  • the second extraction unit 308 may be configured to, before the behavior rules of the target features are input into the pre-built illegal behavior detection model to obtain the behavior detection results, collect in advance video stream data corresponding to different traffic violations and extract the behavior rules of multiple target features from the video stream data;
  • the marking unit 309 may be configured to mark behavior rules of the multiple target features according to the types of illegal behaviors corresponding to each target feature, to obtain multiple traffic illegal behavior rules with labels of the illegal behavior types;
  • the constructing unit 310 may be configured to input the multiple behavior rules carrying illegal behavior type labels as sample data into a convolutional neural network for training to build an illegal behavior detection model, the illegal behavior detection model recording the mapping between the behavior rules of target features and the types of illegal behavior.
  • the first extraction unit 306 includes:
  • the dividing module 3061 may be configured to use the correlation between multiple frames of video image information to divide each frame of video image information into regions, obtaining multiple target regions;
  • the extraction module 3062 may be configured to detect target features with changing characteristics in the target area, and extract each target feature from each frame of video image information.
  • the illegal behavior detection model is a multi-layered network model
  • the detection unit 303 includes:
  • An extraction module 3031 may be used to extract local behavior characteristics of the behavior rules corresponding to the clustering set through a convolution layer of the illegal behavior detection model;
  • a summary module 3032 may be used to summarize the local behavior characteristics of the behavior rules corresponding to the clustering set through the fully connected layer of the illegal behavior detection model to obtain multi-dimensional local behavior characteristics;
  • the dimension reduction module 3033 may be configured to perform dimension reduction processing on the multi-dimensional local behavior features through the pooling layer of the illegal behavior detection model to obtain the illegal behavior features corresponding to the clustering set;
  • the classification module 3034 may be configured to classify the illegal behavior characteristics corresponding to the clustering set through the classification layer of the illegal behavior detection model, and obtain behavior detection results carrying the types of illegal behaviors of each target feature.
  • the generating unit 304 includes:
  • the search module 3041 may be configured to search for the handling-party information corresponding to the illegal behavior type of a target feature according to the behavior detection results of the target features;
  • the generating module 3042 may be configured to fill the handling-party information corresponding to the illegal behavior type of the target feature into a preset report template to generate a traffic accident liability report.
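The search module 3041 described above amounts to a keyed lookup from a target feature's identification (for example, a license plate read from the video stream) to the handling-party records. A minimal sketch, assuming the information is available in a local dictionary; the `REGISTRY` table and its field names are illustrative assumptions, not part of the patent:

```python
# Hypothetical registry mapping license plates to handling-party information.
REGISTRY = {
    "ABC-123": {"owner": "Zhang San", "insurer": "Example Insurance Co."},
}

def find_handler_info(plate, violation_type):
    """Look up the handling-party info for a target feature's illegal behavior type."""
    record = REGISTRY.get(plate)
    if record is None:
        return None  # no record for this identification
    return {"plate": plate, "violation": violation_type, **record}

info = find_handler_info("ABC-123", "ran red light")
print(info["insurer"])
```

In practice the registry would be an external vehicle-administration or insurance database rather than an in-memory dictionary; the sketch only shows the identification-to-party mapping the module performs.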
  • Fig. 6 is a block diagram of a device 400 for determining a traffic accident according to an exemplary embodiment.
  • the device 400 may be a computer device, such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
  • the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an I / O (Input / Output) interface 412, A sensor component 414, and a communication component 416.
  • the processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • the processing component 402 may include one or more processors 420 to execute instructions to complete all or part of the steps of the method described above.
  • the processing component 402 may include one or more modules to facilitate the interaction between the processing component 402 and other components.
  • the processing component 402 may include a multimedia module to facilitate the interaction between the multimedia component 408 and the processing component 402.
  • the memory 404 is configured to store various types of data to support operation at the device 400. Examples of such data include instructions for any application or method for operating on the device 400, contact data, phone book data, messages, pictures, videos, and the like.
  • the memory 404 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the power supply component 406 provides power to various components of the device 400.
  • the power component 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 400.
  • the multimedia component 408 includes a screen that provides an output interface between the device 400 and a user.
  • the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense a boundary of a touch or slide action, but also detect duration and pressure related to the touch or slide operation.
  • the multimedia component 408 includes a front camera and / or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and / or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 410 is configured to output and / or input audio signals.
  • the audio component 410 includes a MIC (Microphone).
  • the microphone is configured to receive an external audio signal.
  • the received audio signal may be further stored in the memory 404 or transmitted via the communication component 416.
  • the audio component 410 further includes a speaker for outputting audio signals.
  • the I / O interface 412 provides an interface between the processing component 402 and a peripheral interface module.
  • the peripheral interface module may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
  • the sensor component 414 includes one or more sensors for providing status assessment of various aspects of the device 400.
  • the sensor component 414 can detect the on/off state of the device 400 and the relative positioning of components, for example the display and keypad of the device 400.
  • the sensor component 414 can also detect a change in the position of the device 400 or of a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400.
  • the sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • the sensor component 414 may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or a CCD (Charge-coupled Device) image sensor, for use in imaging applications.
  • the sensor component 414 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • the communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices.
  • the device 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
  • the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
  • the communication component 416 further includes an NFC (Near Field Communication) module to facilitate short-range communication.
  • the NFC module can be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra Wideband) technology, BT (Bluetooth) technology, and other technologies.
  • the device 400 may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components to perform the above method for determining liability for traffic accidents.
  • a non-transitory computer non-volatile readable storage medium including instructions, for example the memory 404 including instructions, is also provided; the instructions may be executed by the processor 420 of the apparatus 400 to complete the above method.
  • the non-transitory computer non-volatile readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
  • a non-transitory computer non-volatile readable storage medium: when the instructions in the non-volatile readable storage medium are executed by the processor of a traffic accident liability determination device, the device can perform the above method for determining liability for traffic accidents.
  • the modules or steps of the present application can be implemented by general-purpose computing devices; they can be centralized on a single computing device or distributed across a network composed of multiple computing devices.
  • optionally, they may be implemented with computer-readable instructions of a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from the one here, or the modules or steps may be made into individual integrated-circuit modules, or multiple of them may be made into a single integrated-circuit module. As such, this application is not limited to any particular combination of hardware and software.

Abstract

A method and device for determining liability for a traffic accident, and a computer non-volatile readable storage medium, relating to the technical field of traffic accident processing; no traffic police need to be dispatched to the accident scene for investigation, which reduces the workload of the traffic police. The method comprises: acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident (S101); clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident (S102); inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature (S103), the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident; and generating a traffic accident liability report according to the behavior detection results of the target features (S104).

Description

Method and device for determining liability for traffic accidents, and computer-readable storage medium
This application claims priority to Chinese patent application No. 2018108656430, filed with the China Patent Office on August 1, 2018 and entitled "Method, device, computer equipment and computer storage medium for determining liability for traffic accidents", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of traffic accident processing, and in particular to a method and device for determining liability for traffic accidents and a computer non-volatile readable storage medium.
Background
With the development of society, automobiles have become a primary means of transportation, bringing convenience to people's travel. However, while the number of cars in cities has surged, drivers' risk awareness has not improved at the same pace, so traffic accidents have increased year by year, and the content and intensity of traffic police work have grown accordingly.
In the existing process of determining liability for traffic accidents, even for simple accidents involving illegal behaviors such as rear-end collisions, running red lights, crashing into green belts, or illegal lane changes, the traffic police must be dispatched to investigate according to the accident type, or must determine the division of liability from photographs of the scene. This is time-consuming and labor-intensive for the traffic police: on top of their other daily work, they must spend effort on responding to accidents and reviewing scene photographs, which greatly wastes labor costs. Meanwhile, drivers pay more attention to the liability result determined by the traffic police, and the division of liability may only serve as a warning to the party mainly at fault; it does not awaken traffic safety awareness.
Summary
The embodiments of the present application provide a method and device for determining liability for traffic accidents and a computer non-volatile readable storage medium, which solve the problem in the related art that determining liability for a traffic accident consumes a great deal of manpower.
According to a first aspect of the embodiments of the present application, a method for determining liability for a traffic accident is provided, the method comprising:
acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
generating a traffic accident liability report according to the behavior detection results of the target features.
According to a second aspect of the embodiments of the present application, a device for determining liability for a traffic accident is provided, the device comprising:
an acquisition unit configured to acquire a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
a clustering unit configured to cluster each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
a detection unit configured to input the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
a generating unit configured to generate a traffic accident liability report according to the behavior detection results of the target features.
According to a third aspect of the embodiments of the present application, a computer non-volatile readable storage medium is provided, on which computer-readable instructions are stored; when the instructions are executed by a processor, the following steps are implemented:
acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
generating a traffic accident liability report according to the behavior detection results of the target features.
According to a fourth aspect of the embodiments of the present application, a computer device is provided, comprising a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor; when the processor executes the instructions, the following steps are implemented:
acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
generating a traffic accident liability report according to the behavior detection results of the target features.
Through the present application, the target features in the video stream information within the preset time period corresponding to the occurrence of a traffic accident are clustered to obtain the behavior rule corresponding to each target feature clustering set within that time period; by inputting the behavior rule corresponding to each clustering set into the pre-built illegal behavior detection model for behavior detection, the behavior detection results of the target features are obtained and liability for the traffic accident is determined. Compared with the prior-art approach of dispatching traffic police to the accident scene to determine liability, the embodiments of the present application require no on-site investigation by the traffic police, reduce their workload, can determine liability for illegal behaviors during driving, and raise drivers' awareness of traffic regulations.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of the present application and constitute a part of the present application. The illustrative embodiments of the present application and their description are used to explain the present application and do not constitute an improper limitation of it. In the drawings:
FIG. 1 is a flowchart of a method for determining liability for a traffic accident according to an embodiment of the present application;
FIG. 2 is a flowchart of another method for determining liability for a traffic accident according to an embodiment of the present application;
FIG. 3 is a flowchart of building an illegal behavior detection model according to an embodiment of the present application;
FIG. 4 is a structural block diagram of a device for determining liability for a traffic accident according to an embodiment of the present application;
FIG. 5 is a structural block diagram of another device for determining liability for a traffic accident according to an embodiment of the present application;
FIG. 6 is a block diagram of a device 400 for determining liability for a traffic accident according to an embodiment of the present application.
Detailed Description
The present application is described in detail below with reference to the drawings and in combination with the embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
This embodiment provides a method for determining liability for a traffic accident. FIG. 1 is a first flowchart according to an embodiment of the present application. As shown in FIG. 1, the process includes the following steps:
Step S101: acquire a video information stream within a preset time period corresponding to the occurrence of a traffic accident.
The preset time period may be, for example, the 30 or 60 seconds before the traffic accident occurred; it may also be a time interval around the accident. For example, if the accident occurred at 9:10, the preset time period may be selected as 9:00 to 9:20. The embodiments of the present application do not limit this.
By acquiring the video information stream within the preset time period corresponding to the occurrence of the traffic accident, the embodiment of the present application retrieves the video of the accident process and then analyzes how the accident occurred, so as to determine liability for the accident.
Step S102: cluster each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident.
A target feature is a feature appearing in the video information stream within the preset time period corresponding to the occurrence of the traffic accident, for example a vehicle, a zebra crossing, a lawn, or a person. Specifically, each target feature in the video information stream may be detected by the temporal difference method or the optical flow method, and the target features may then be extracted from the detected motion regions.
In this embodiment, by clustering the target features, instances of the same target feature are grouped into one cluster to obtain a clustering set for each target feature; the clustering set of a target feature is equivalent to the collection of detections of that target feature in the video information stream.
The behavior rule corresponding to a target feature clustering set may be the movement trajectory of the target feature in the video information stream: for example, vehicle m moves from position a to position b, pedestrian n stays in place, or pedestrian n crosses the road. The embodiments of the present application do not limit this.
It should be noted that the behavior rule of a target feature includes not only its movement trajectory in the video information stream but also appearance features such as changes in color, size, and shape; for example, the appearance feature of pedestrian p may be lying on the ground, and the appearance feature of a vehicle may be deformation.
Since different target features appear in the video information stream, and different target features differ in their movement trajectories or appearance, the embodiment of the present application clusters the target features in the video information stream to form target feature clustering sets. A target feature clustering set may consist of continuously changing or static data of the target feature, from which the behavior rule corresponding to the clustering set within the preset time period corresponding to the occurrence of the traffic accident can be determined.
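The grouping of per-frame detections into per-target clustering sets can be sketched as follows. This is a minimal illustration, assuming each detection is a `(frame_index, target_id, x, y)` tuple with a known `target_id`; in practice the association of detections across frames is itself part of the detection step, so the dictionary-based grouping here is an assumption, not the patent's prescribed algorithm:

```python
from collections import defaultdict

def cluster_detections(detections):
    """Group (frame, target_id, x, y) detections into per-target clustering sets."""
    clusters = defaultdict(list)
    for frame, target_id, x, y in detections:
        clusters[target_id].append((frame, x, y))
    # Sort each target's detections by frame so the set reads as a time series.
    return {tid: sorted(points) for tid, points in clusters.items()}

def trajectory(cluster):
    """A behavior rule in its simplest form: the ordered (x, y) path of a target."""
    return [(x, y) for _, x, y in cluster]

detections = [
    (0, "vehicle_m", 10, 5), (1, "vehicle_m", 20, 5), (2, "vehicle_m", 30, 5),
    (0, "pedestrian_n", 50, 50), (1, "pedestrian_n", 50, 50),
]
clusters = cluster_detections(detections)
print(trajectory(clusters["vehicle_m"]))     # vehicle m moves from a toward b
print(trajectory(clusters["pedestrian_n"]))  # pedestrian n stays in place
```

Appearance attributes (color, size, shape) could be carried in the same tuples and summarized alongside the trajectory to form a richer behavior rule.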
Step S103: input the behavior rule corresponding to each target feature clustering set into the pre-built illegal behavior detection model for behavior detection to obtain the behavior detection result of each target feature.
The illegal behavior detection model is a detection model built to detect the illegal behavior type corresponding to a target feature's behavior rule; it records the mapping between target features' behavior rules and illegal behavior types, so that the behavior rules of the target features in a traffic accident can be recognized. A behavior detection result is the illegal behavior type of a target feature, such as a vehicle driving in an illegal direction or a pedestrian running a red light.
In this embodiment, the convolutional neural network consists of multiple layers, each with different input/output parameters and different functions. By repeatedly training on the pre-collected traffic violation behavior rules of target features, the motion characteristics and behavior paradigms of different illegal behaviors can be summarized to build the illegal behavior detection model, which records the mapping between target features' behavior rules and illegal behavior types; with this model, the illegal behavior type corresponding to a target feature's behavior rule can be detected.
It should be noted that the traffic violation behavior rules of the target features may be motion behavior features described in multiple dimensions. For example, the motion features of an illegal behavior rule may be described along the time dimension, i.e., at different time points, or along the spatial dimension, i.e., at different positions; this is not limited here. Other dimensions, such as appearance features, may of course also be added to describe the motion features of a target feature's illegal behavior rule.
The structure of the convolutional neural network model may be implemented with convolution layers, fully connected layers, pooling layers, and the like. The convolution layers act as the hidden layers of the network and may be stacked: after the pre-collected traffic violation behavior rules of the target features are input, deeper illegal-behavior feature vectors are extracted through the convolution layers, and these feature vectors are then fitted and trained against the known illegal behavior types. In convolutional neural network models, to reduce parameters and computation, pooling layers are often inserted between consecutive convolution layers. A fully connected layer is similar to a convolution layer in that its neurons are connected to local regions of the previous layer's output; to avoid producing too many output feature vectors, two fully connected layers may be set to integrate the feature vectors output after training through several convolution layers.
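The layer pipeline described here can be illustrated with a tiny untrained forward pass over a one-dimensional behavior rule (a target's position over time). This is a sketch only: the weights are random, and the kernel sizes and layer widths are illustrative assumptions. It follows the layer order stated in this document (convolution → fully connected → pooling → classification):

```python
import math
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def relu(v):
    return [max(0.0, x) for x in v]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def conv1d(seq, kernels):
    """Convolution layer: slide each kernel across the behavior-rule sequence."""
    k = len(kernels[0])
    return [relu(matvec(kernels, seq[i:i + k])) for i in range(len(seq) - k + 1)]

def softmax(z):
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

# Toy behavior rule: a target's x-position over 12 frames.
behavior_rule = [i / 11 for i in range(12)]

kernels = rand_matrix(4, 3)   # 4 convolution kernels of width 3
fc = rand_matrix(8, 4)        # fully connected layer weights
clf = rand_matrix(3, 8)       # classification layer (3 violation types)

feature_maps = conv1d(behavior_rule, kernels)         # convolution layer
dense = [relu(matvec(fc, f)) for f in feature_maps]   # fully connected layer
pooled = [max(col) for col in zip(*dense)]            # pooling layer (max over time)
probs = softmax(matvec(clf, pooled))                  # classification layer
print(len(probs), sum(probs))
```

A trained model would learn `kernels`, `fc`, and `clf` from the labeled behavior rules of steps S206 to S208 rather than use random values.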
Step S104: generate a traffic accident liability report according to the behavior detection results of the target features.
The behavior detection result of each target feature is the illegal behavior type corresponding to that feature's behavior rule, e.g., a vehicle driving the wrong way or a pedestrian running a red light. The traffic accident liability report may record the illegal behavior type, the accident party corresponding to the illegal behavior, and information about that party. For example, if the illegal behavior type is a rear-end collision and the accident party is vehicle A, the party information may include the personal information of vehicle A's owner.
In this embodiment, the behavior detection results of the target features give a straightforward picture of how the accident unfolded and of the parties involved. In addition, because a traffic accident may require one or more parties to coordinate its handling (for example, a rear-end accident requires both owners and the insurance companies, whereas a wrong-way driving accident requires only the wrong-way driver), traffic accident liability reports suitable for different handling parties need to be generated for different illegal behavior types, so that each handling party can process the accident in time.
Through the present application, the target features in the video stream information within the preset time period corresponding to the occurrence of the traffic accident are clustered to obtain the behavior rule corresponding to each target feature clustering set within that time period; by inputting the behavior rule corresponding to each clustering set into the pre-built illegal behavior detection model for behavior detection, the behavior detection results of the target features are obtained and liability for the traffic accident is determined. Compared with the prior-art approach of dispatching traffic police to the accident scene to determine liability, the embodiments of the present application require no on-site investigation by the traffic police, reduce their workload, can determine liability for illegal behaviors during driving, and raise drivers' awareness of traffic regulations.
FIG. 2 is a flowchart of a method for determining liability for a traffic accident according to a preferred embodiment of the present application. As shown in FIG. 2, the method includes the following steps:
Step S201: acquire a video information stream within a preset time period corresponding to the occurrence of a traffic accident.
In this embodiment, the video information stream may be vehicle driving data captured by a road camera or recorded by a driving recorder; the embodiments of the present application do not limit this.
Specifically, a road surveillance camera installed at an intersection may be connected through a preset interface; such a camera can photograph the rear of vehicles and monitor illegal behaviors such as running red lights, not following lane guidance, illegal lane changes, crossing solid lines, and wrong-way driving, so that video stream data of vehicles in motion is acquired. Alternatively, the memory card of a driving recorder may be read; the recorder stores video images and sound captured while driving and monitors the vehicle's surroundings, likewise providing video stream data of the vehicle in motion.
Step S202: split the video information stream within the preset time period into frames to obtain multiple frames of video image information.
The information contained in a video stream can generally be divided into spatial information and temporal information. Spatial information appears in each frame, such as the scene objects in the video; temporal information appears as the motion changes between frames, such as the movement of objects in the scene. To better obtain both, the video information stream within the preset time period is split into frames to obtain multiple frames of video image information, and each frame is analyzed to extract more image information.
Of course, to obtain more motion information, this embodiment may also use an optical flow computation method to obtain the optical flow image sequence corresponding to the video image sequence; the optical flow images reflect the motion between two consecutive frames. Because the length of the optical flow image sequence differs from that of the original video image sequence, the optical flow sequence needs to be processed in advance so that its length matches that of the original image sequence.
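The temporal difference method mentioned earlier for finding moving targets between consecutive frames can be sketched in a few lines. This is a minimal illustration under the assumption that frames are small grayscale grids represented as nested lists; real systems operate on camera images and typically follow the differencing with noise filtering and region grouping:

```python
def frame_difference(prev, curr, threshold=10):
    """Temporal difference: mark pixels whose intensity changed more than threshold."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

def motion_pixels(mask):
    """Count changed pixels; clusters of them indicate a moving target region."""
    return sum(sum(row) for row in mask)

# Two toy 4x4 grayscale frames: a bright 2x2 "object" moves one pixel right.
frame0 = [[0] * 4 for _ in range(4)]
frame1 = [[0] * 4 for _ in range(4)]
for r in (1, 2):
    frame0[r][0] = frame0[r][1] = 200
    frame1[r][1] = frame1[r][2] = 200

mask = frame_difference(frame0, frame1)
print(motion_pixels(mask))  # pixels where the object left or arrived
```

The changed-pixel mask marks the motion regions from which target features are later extracted; the optical flow method mentioned in the text additionally estimates the direction and magnitude of motion per pixel.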
Step S203: extract each target feature from each frame of video image information.
In this embodiment, the correlation between multiple frames of video image information may be used to divide each frame into regions, obtaining multiple target regions. For example, if in the first five frames target feature A is located on the left of the video image and target feature B on the right, the image may be divided into two regions; of course, as the number and positions of the target features change, the target regions can be adjusted in real time. Target features with changing characteristics within the target regions are then detected, and each target feature is extracted from each frame of video image information.
It should be noted that, because the video information stream records dynamic images of the accident as it happened, target feature detection differs from the detection methods used for static images: detection in dynamic images usually uses the relationship between consecutive frames of the video stream to locate the regions of interest. Target features in video images often have motion characteristics, so multiple frames need to be introduced; in this way, not only the appearance information of a target feature in the frames but also its motion information across the frames can be obtained.
In this embodiment, the target features in each frame may be obtained based on motion segmentation or background extraction; for example, the current road, the vehicles of the parties to the accident, the parties themselves, and the red-and-white road markings may be extracted as target features from the driving images monitored by the current road camera.
Step S204: use the correlation between multiple frames of video image information to filter the target features in each frame, retaining the target features that meet preset conditions.
Some target features in the video images may be unqualified, for example features that appear in the current frame but disappear in the next, or features that appear in every frame but have nothing to do with the traffic accident. Considering the persistence of target features across consecutive frames, such as consistency of size, color, and trajectory, the unqualified target features can be deleted from the video images, and the target features that meet the preset conditions retained.
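One simple instance of this filtering, keeping only features that persist across enough frames, can be sketched as follows. The persistence-count criterion is an illustrative assumption; the text also mentions consistency of size, color, and trajectory, which would be additional checks:

```python
def filter_target_features(frames, min_frames=3):
    """Keep only target features that appear in at least `min_frames` frames.

    `frames` is a list of per-frame sets of detected feature IDs.
    """
    counts = {}
    for feature_ids in frames:
        for fid in feature_ids:
            counts[fid] = counts.get(fid, 0) + 1
    kept = {fid for fid, n in counts.items() if n >= min_frames}
    return [feature_ids & kept for feature_ids in frames]

frames = [
    {"vehicle_a", "pedestrian_b", "glare"},  # "glare" is a spurious detection
    {"vehicle_a", "pedestrian_b"},
    {"vehicle_a", "pedestrian_b"},
    {"vehicle_a"},
]
filtered = filter_target_features(frames)
print(filtered[0])  # the one-frame "glare" detection is dropped
```

Features that persist but are unrelated to the accident (the second case in the text) would need a spatial criterion, e.g., distance from the collision region, on top of this count.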
Step S205: cluster each target feature in the video information stream to obtain the behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident.
Because the position and appearance information of a target feature may differ from frame to frame, the position and appearance information of the target features in individual frames is used to cluster the same target feature across multiple frames, forming the behavior rule corresponding to the target feature clustering set, which may be continuous motion data or static data of the target feature. For example, if target feature pedestrian C appears in the first eight frames, clustering pedestrian C across those frames yields the behavior rule of pedestrian C running a red light.
Step S209: input the behavior rule corresponding to each target feature clustering set into the pre-built illegal behavior detection model for behavior detection to obtain the behavior detection result of each target feature.
In this embodiment, by inputting the behavior rule corresponding to each target feature clustering set into the pre-built illegal behavior detection model for behavior detection, the illegal behavior type of each target feature's behavior rule is detected without the traffic police investigating the scene.
Step S210: search for the handling-party information corresponding to the illegal behavior type of a target feature according to the behavior detection results of the target features.
In this embodiment, because the behavior detection results of the target features may differ, a target feature whose detection result is an illegal behavior type can be preliminarily classified as a liable party, and a target feature whose behavior rule is detected as lawful can be preliminarily classified as a non-liable party, so that the liable and non-liable parties of the accident are preliminarily determined. Because different detection results may involve different handling parties, the handling-party information corresponding to the illegal behavior type of a target feature is looked up according to the behavior detection results; specifically, the identification information of the target feature can be obtained from the video information stream, and the handling-party information corresponding to the target feature's illegal behavior type is then looked up according to that identification information.
For example, if the target feature is vehicle A and the detection result is the illegal behavior type of vehicle A running a red light, the license plate number of vehicle A is obtained from the video information stream, and the handling-party information corresponding to vehicle A, such as owner information and insurance information, is looked up according to the plate number.
It should be noted that a traffic accident may involve one, two, or more parties. For accidents involving two or more parties, a target feature classified as a liable party may not be a person; in that case, liability can be transferred to the person to whom the target feature belongs. Similarly, when a target feature classified as a non-liable party is not a person, the non-liable role can be transferred to the person to whom that target feature belongs.
Step S211: fill the handling-party information corresponding to the illegal behavior type of the target feature into a preset report template to generate the traffic accident liability report.
The preset report template may record information such as the illegal behavior type, the parties to the accident, and the liable party; different report templates may be set for different handling parties. For example, a police handling party needs to know the accident type and the parties involved, so the liability report sent to the police mainly includes the accident type, the liable party, and the parties to the accident; an insurance company needs to know accident liability in order to judge subsequent claims, so the report sent to the insurance company mainly includes the liable party, the parties' information, and the claim settlement plan.
Of course, to make further details of the accident available, the liability report may also record the time and location information of the accident: the time information may be the time the accident occurred or the time of the collision, and the location information may be the collision position or the position of the traffic lights involved. If a target feature is a person or a vehicle, the report may also record the identification information of the target feature, such as the person's name, the license plate number, and road markings.
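Filling the handling-party information into a preset template can be sketched with the standard library's `string.Template`. The template text and field names below are illustrative assumptions, not the patent's prescribed report format:

```python
from string import Template

# A hypothetical report template; a real system would keep one per handling party.
REPORT_TEMPLATE = Template(
    "Traffic Accident Liability Report\n"
    "Violation type: $violation\n"
    "Liable party:   $liable_party\n"
    "Handling party: $handler\n"
    "Time/location:  $time, $location\n"
)

def generate_report(detection_result, handler_info):
    """Fill detection results and handling-party info into the preset template."""
    return REPORT_TEMPLATE.substitute(**detection_result, **handler_info)

detection_result = {"violation": "rear-end collision", "liable_party": "vehicle A"}
handler_info = {"handler": "insurance company",
                "time": "09:10", "location": "junction of X Road"}
report = generate_report(detection_result, handler_info)
print(report)
```

Selecting a different template per handling party (police, insurer, driver) then reduces to a dictionary keyed by the handler type.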
Through the embodiments of the present application, the target features in the video stream information within the preset time period corresponding to the occurrence of the traffic accident are clustered to obtain the behavior rule corresponding to each target feature clustering set within that time period; by inputting the behavior rule corresponding to each clustering set into the pre-built illegal behavior detection model for behavior detection, the behavior detection results of the target features are obtained and liability for the traffic accident is determined. Compared with the prior-art approach of dispatching traffic police to the accident scene to determine liability, the embodiments of the present application require no on-site investigation by the traffic police, reduce their workload, can determine liability for illegal behaviors during driving, and raise drivers' awareness of traffic regulations.
It should be noted that the illegal behavior detection model needs to be built before step S209 is executed; this can be done through steps S206 to S208 below. Of course, the process of building the model is not limited to being executed after steps S201 to S205. As shown in FIG. 3, building the illegal behavior detection model includes the following steps:
Step S206: collect in advance the video stream data corresponding to different traffic violations, and extract the behavior rules of multiple target features from the video stream data.
In this embodiment, cases of different traffic violations, such as rear-end accidents, overtaking accidents, left-turn accidents, and right-turn accidents, may be collected in advance to obtain the corresponding video stream data; the video stream data of the different violations is then processed through steps S202 to S205 to extract the behavior rules of multiple target features. The specific processing and extraction are not repeated here.
Step S207: mark the behavior rules of the multiple target features according to the illegal behavior type corresponding to each target feature, to obtain multiple behavior rules carrying illegal behavior type labels.
Because different illegal behavior types correspond to different behavior rules, the behavior rules of the multiple target features are marked according to each target feature's illegal behavior type, yielding multiple behavior rules labeled with illegal behavior types, such as a behavior rule labeled with the red-light-running violation type or a behavior rule labeled with the rear-end-collision violation type.
Step S208: input the multiple behavior rules carrying illegal behavior type labels as sample data into a convolutional neural network for training, and build the illegal behavior detection model.
In this embodiment, the convolutional neural network is a multi-layer network model. Training the network on the multiple labeled behavior rules may specifically include: extracting local behavior features of the behavior rules corresponding to the clustering sets through the convolution layers of the illegal behavior detection model; summarizing those local behavior features through the fully connected layer of the model to obtain multi-dimensional local behavior features; performing dimension reduction on the multi-dimensional local behavior features through the pooling layer of the model to obtain the illegal behavior features corresponding to the clustering sets; and classifying those illegal behavior features through the classification layer of the model to obtain behavior detection results carrying the illegal behavior type of each target feature.
It should be noted that some target features deform noticeably and with certain regularity in the video. When extracting the behavior features and motion rules corresponding to each target feature's behavior rule, this embodiment learns these deformation rules and summarizes the motion characteristics and behavior paradigms of different traffic violations, and then detects whether a target feature satisfies a preset behavior change. Common representations of behavior features and motion rules include 3D descriptors, Markov-based shape dynamics, pose/primitive-action-based histograms, and so on.
Because the illegal behavior detection model records the mapping between behavior rules and the different illegal behavior types, the model can check one by one whether a target feature's behavior rule satisfies the behavior rule corresponding to an illegal behavior type, yielding the detection result.
FIG. 4 is a structural block diagram of a device for determining liability for a traffic accident according to an embodiment of the present application. Referring to FIG. 4, the device includes an acquisition unit 301, a clustering unit 302, a detection unit 303, and a generating unit 304.
The acquisition unit 301 may be configured to acquire a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
the clustering unit 302 may be configured to cluster each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
the detection unit 303 may be configured to input the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain the behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
the generating unit 304 may be configured to generate a traffic accident liability report according to the behavior detection results of the target features.
Through the present application, the target features in the video stream information within the preset time period corresponding to the occurrence of the traffic accident are clustered to obtain the behavior rule corresponding to each target feature clustering set within that time period; by inputting the behavior rule corresponding to each clustering set into the pre-built illegal behavior detection model for behavior detection, the behavior detection results of the target features are obtained and liability for the traffic accident is determined. Compared with the prior-art approach of dispatching traffic police to the accident scene to determine liability, the embodiments of the present application require no on-site investigation by the traffic police, reduce their workload, can determine liability for illegal behaviors during driving, and raise drivers' awareness of traffic regulations.
As a further description of the device shown in FIG. 4, FIG. 5 is a schematic structural diagram of another device for determining liability for a traffic accident according to an embodiment of the present application. As shown in FIG. 5, the device further includes:
a framing unit 305, which may be configured to, before each target feature in the video information stream is clustered to obtain the motion behavior rule of each target feature within the preset time period corresponding to the occurrence of the traffic accident, split the video information stream within the preset time period into frames to obtain multiple frames of video image information;
a first extraction unit 306, which may be configured to extract each target feature from each frame of video image information;
a filtering unit 307, which may be configured to, after each target feature is extracted from each frame of video image information, filter the target features in each frame using the correlation between the multiple frames of video image information, retaining the target features that meet preset conditions;
a second extraction unit 308, which may be configured to, before the motion behavior rules of the target features are input into the pre-built illegal behavior detection model for behavior detection to obtain the behavior detection results of the target features, collect in advance the video stream data corresponding to different traffic violations and extract the behavior rules of multiple target features from the video stream data;
a marking unit 309, which may be configured to mark the behavior rules of the multiple target features according to the illegal behavior type corresponding to each target feature, to obtain multiple traffic violation behavior rules carrying illegal behavior type labels;
a constructing unit 310, which may be configured to input the multiple behavior rules carrying illegal behavior type labels as sample data into a convolutional neural network for training to build an illegal behavior detection model, the illegal behavior detection model recording the mapping between the behavior rules of target features and the types of illegal behavior.
Further, the first extraction unit 306 includes:
a dividing module 3061, which may be configured to use the correlation between multiple frames of video image information to divide each frame of video image information into regions, obtaining multiple target regions;
an extraction module 3062, which may be configured to detect target features with changing characteristics within the target regions and extract each target feature from each frame of video image information.
Further, the illegal behavior detection model is a multi-layer network model, and the detection unit 303 includes:
an extraction module 3031, which may be configured to extract local behavior features of the behavior rules corresponding to the clustering sets through the convolution layers of the illegal behavior detection model;
a summarizing module 3032, which may be configured to summarize the local behavior features of the behavior rules corresponding to the clustering sets through the fully connected layer of the illegal behavior detection model to obtain multi-dimensional local behavior features;
a dimension reduction module 3033, which may be configured to perform dimension reduction on the multi-dimensional local behavior features through the pooling layer of the illegal behavior detection model to obtain the illegal behavior features corresponding to the clustering sets;
a classification module 3034, which may be configured to classify the illegal behavior features corresponding to the clustering sets through the classification layer of the illegal behavior detection model to obtain behavior detection results carrying the illegal behavior type of each target feature.
Further, the generating unit 304 includes:
a search module 3041, which may be configured to search for the handling-party information corresponding to the illegal behavior type of a target feature according to the behavior detection results of the target features;
a generating module 3042, which may be configured to fill the handling-party information corresponding to the illegal behavior type of the target feature into a preset report template to generate a traffic accident liability report.
FIG. 6 is a block diagram of a device 400 for determining liability for a traffic accident according to an exemplary embodiment. The device 400 may be a computer device, such as a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to FIG. 6, the device 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an I/O (Input/Output) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operation of the device 400, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 402 may include one or more processors 420 to execute instructions to complete all or part of the steps of the method described above. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operation of the device 400. Examples of such data include instructions for any application or method operated on the device 400, contact data, phone book data, messages, pictures, videos, and the like. The memory 404 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as SRAM (Static Random Access Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), ROM (Read-Only Memory), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 406 provides power to the various components of the device 400. It may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen that provides an output interface between the device 400 and the user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (Touch Panel). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focus and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a MIC (Microphone); when the device 400 is in an operation mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive external audio signals. The received audio signals may be further stored in the memory 404 or transmitted via the communication component 416. In some embodiments, the audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 can detect the on/off state of the device 400 and the relative positioning of components, such as the display and keypad of the device 400; it can also detect a change in the position of the device 400 or of one of its components, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and temperature changes of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. It may also include a light sensor, such as a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 also includes an NFC (Near Field Communication) module to facilitate short-range communication; the NFC module can be implemented based on RFID (Radio Frequency Identification) technology, IrDA (Infrared Data Association) technology, UWB (Ultra Wideband) technology, BT (Bluetooth) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more ASICs (Application Specific Integrated Circuits), DSPs (Digital Signal Processors), DSPDs (Digital Signal Processing Devices), PLDs (Programmable Logic Devices), FPGAs (Field Programmable Gate Arrays), controllers, microcontrollers, microprocessors, or other electronic components to perform the above method for determining liability for traffic accidents.
In an exemplary embodiment, a non-transitory computer non-volatile readable storage medium including instructions is also provided, for example the memory 404 including instructions; the instructions may be executed by the processor 420 of the device 400 to complete the above method. For example, the non-transitory computer non-volatile readable storage medium may be a ROM, a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer non-volatile readable storage medium: when the instructions in the non-volatile readable storage medium are executed by the processor of a traffic accident liability determination device, the device can perform the above method for determining liability for traffic accidents.
Obviously, those skilled in the art should understand that the modules or steps of the present application can be implemented by general-purpose computing devices; they can be centralized on a single computing device or distributed across a network composed of multiple computing devices. Optionally, they may be implemented with computer-readable instructions of a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from the one here, or the modules or steps may be made into individual integrated-circuit modules, or multiple of them may be made into a single integrated-circuit module. As such, this application is not limited to any particular combination of hardware and software.
The above are only preferred embodiments of the present application and are not intended to limit it. For those skilled in the art, various modifications and changes may be made to the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the protection scope of the present application.

Claims (20)

  1. A method for determining liability for a traffic accident, characterized in that the method comprises:
    acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
    clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
    inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
    generating a traffic accident liability report according to the behavior detection results of the target features.
  2. The method according to claim 1, characterized in that, before clustering each target feature in the video information stream to obtain the behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident, the method further comprises:
    splitting the video information stream within the preset time period into frames to obtain multiple frames of video image information;
    extracting each target feature from each frame of video image information.
  3. The method according to claim 2, characterized in that extracting each target feature from each frame of video image information comprises:
    using the correlation between multiple frames of video image information to divide each frame of video image information into regions, obtaining multiple target regions;
    detecting target features with changing characteristics within the target regions and extracting each target feature from each frame of video image information.
  4. The method according to claim 2, characterized in that, after extracting each target feature from each frame of video image information, the method further comprises:
    using the correlation between multiple frames of video image information to filter the target features in each frame of video image information, retaining the target features that meet preset conditions.
  5. The method according to claim 1, characterized in that, before inputting the behavior rule corresponding to each target feature clustering set into the pre-built illegal behavior detection model for behavior detection to obtain the behavior detection result of each target feature, the method further comprises:
    collecting in advance the video stream data corresponding to different traffic violations, and extracting the behavior rules of multiple target features from the video stream data;
    marking the behavior rules of the multiple target features according to the illegal behavior type corresponding to each target feature, to obtain multiple behavior rules carrying illegal behavior type labels;
    inputting the multiple behavior rules carrying illegal behavior type labels as sample data into a convolutional neural network for training to build an illegal behavior detection model, the illegal behavior detection model recording the mapping between the behavior rules of target features and illegal behavior types.
  6. The method according to claim 1, characterized in that the illegal behavior detection model is a multi-layer network model, and inputting the behavior rule corresponding to each target feature clustering set into the pre-built illegal behavior detection model for behavior detection to obtain the behavior detection result of each target feature comprises:
    extracting local behavior features of the behavior rules corresponding to the clustering sets through the convolution layers of the illegal behavior detection model;
    summarizing the local behavior features of the behavior rules corresponding to the clustering sets through the fully connected layer of the illegal behavior detection model to obtain multi-dimensional local behavior features;
    performing dimension reduction on the multi-dimensional local behavior features through the pooling layer of the illegal behavior detection model to obtain the illegal behavior features corresponding to the clustering sets;
    classifying the illegal behavior features corresponding to the clustering sets through the classification layer of the illegal behavior detection model to obtain behavior detection results carrying the illegal behavior type of each target feature.
  7. The method according to claim 6, characterized in that generating the traffic accident liability report according to the behavior detection results of the target features comprises:
    searching for the handling-party information corresponding to the illegal behavior type of a target feature according to the behavior detection results of the target features;
    filling the handling-party information corresponding to the illegal behavior type of the target feature into a preset report template to generate the traffic accident liability report.
  8. A device for determining liability for a traffic accident, characterized in that the device comprises:
    an acquisition unit configured to acquire a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
    a clustering unit configured to cluster each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
    a detection unit configured to input the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
    a generating unit configured to generate a traffic accident liability report according to the behavior detection results of the target features.
  9. The device according to claim 8, characterized in that the device further comprises:
    a framing unit configured to, before each target feature in the video information stream is clustered to obtain the motion behavior rule of each target feature within the preset time period corresponding to the occurrence of the traffic accident, split the video information stream within the preset time period into frames to obtain multiple frames of video image information;
    a first extraction unit configured to extract each target feature from each frame of video image information.
  10. The device according to claim 9, characterized in that the first extraction unit comprises:
    a dividing module configured to use the correlation between multiple frames of video image information to divide each frame of video image information into regions, obtaining multiple target regions;
    an extraction module configured to detect target features with changing characteristics within the target regions and extract each target feature from each frame of video image information.
  11. The device according to claim 9, characterized in that the device further comprises:
    a filtering unit configured to, after each target feature is extracted from each frame of video image information, filter the target features in each frame of video image information using the correlation between the multiple frames of video image information, retaining the target features that meet preset conditions.
  12. The device according to claim 8, characterized in that the device further comprises:
    a second extraction unit configured to, before the motion behavior rules of the target features are input into the pre-built illegal behavior detection model for behavior detection to obtain the behavior detection results of the target features, collect in advance the video stream data corresponding to different traffic violations and extract the behavior rules of multiple target features from the video stream data;
    a marking unit configured to mark the behavior rules of the multiple target features according to the illegal behavior type corresponding to each target feature, to obtain multiple traffic violation behavior rules carrying illegal behavior type labels;
    a constructing unit configured to input the multiple behavior rules carrying illegal behavior type labels as sample data into a convolutional neural network for training to build an illegal behavior detection model, the illegal behavior detection model recording the mapping between the behavior rules of target features and illegal behavior types.
  13. The device according to claim 8, characterized in that the illegal behavior detection model is a multi-layer network model, and the detection unit comprises:
    an extraction module configured to extract local behavior features of the behavior rules corresponding to the clustering sets through the convolution layers of the illegal behavior detection model;
    a summarizing module configured to summarize the local behavior features of the behavior rules corresponding to the clustering sets through the fully connected layer of the illegal behavior detection model to obtain multi-dimensional local behavior features;
    a dimension reduction module configured to perform dimension reduction on the multi-dimensional local behavior features through the pooling layer of the illegal behavior detection model to obtain the illegal behavior features corresponding to the clustering sets;
    a classification module configured to classify the illegal behavior features corresponding to the clustering sets through the classification layer of the illegal behavior detection model to obtain behavior detection results carrying the illegal behavior type of each target feature.
  14. The device according to claim 13, characterized in that the generating unit comprises:
    a search module configured to search for the handling-party information corresponding to the illegal behavior type of a target feature according to the behavior detection results of the target features;
    a generating module configured to fill the handling-party information corresponding to the illegal behavior type of the target feature into a preset report template to generate a traffic accident liability report.
  15. A computer non-volatile readable storage medium on which computer-readable instructions are stored, characterized in that, when the computer-readable instructions are executed by a processor, a method for determining liability for a traffic accident is implemented, comprising:
    acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
    clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
    inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
    generating a traffic accident liability report according to the behavior detection results of the target features.
  16. The computer non-volatile readable storage medium according to claim 15, characterized in that, when the computer-readable instructions are executed by the processor, before clustering each target feature in the video information stream to obtain the behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident, the method further comprises:
    splitting the video information stream within the preset time period into frames to obtain multiple frames of video image information;
    extracting each target feature from each frame of video image information.
  17. The computer non-volatile readable storage medium according to claim 16, characterized in that, when the computer-readable instructions are executed by the processor, extracting each target feature from each frame of video image information comprises:
    using the correlation between multiple frames of video image information to divide each frame of video image information into regions, obtaining multiple target regions;
    detecting target features with changing characteristics within the target regions and extracting each target feature from each frame of video image information.
  18. A computer device comprising a memory, a processor, and computer-readable instructions stored on the memory and executable on the processor, characterized in that, when the processor executes the computer-readable instructions, a method for determining liability for a traffic accident is implemented, comprising:
    acquiring a video information stream within a preset time period corresponding to the occurrence of a traffic accident;
    clustering each target feature in the video information stream to obtain a behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident;
    inputting the behavior rule corresponding to each target feature clustering set into a pre-built illegal behavior detection model for behavior detection to obtain a behavior detection result of each target feature, the illegal behavior detection model being used to detect the illegal behavior type of each target feature within the preset time period corresponding to the occurrence of the traffic accident;
    generating a traffic accident liability report according to the behavior detection results of the target features.
  19. The computer device according to claim 18, characterized in that, when the processor executes the computer-readable instructions, before clustering each target feature in the video information stream to obtain the behavior rule corresponding to each target feature clustering set within the preset time period corresponding to the occurrence of the traffic accident, the method further comprises:
    splitting the video information stream within the preset time period into frames to obtain multiple frames of video image information;
    extracting each target feature from each frame of video image information.
  20. The computer device according to claim 19, characterized in that, when the processor executes the computer-readable instructions, extracting each target feature from each frame of video image information comprises:
    using the correlation between multiple frames of video image information to divide each frame of video image information into regions, obtaining multiple target regions;
    detecting target features with changing characteristics within the target regions and extracting each target feature from each frame of video image information.
PCT/CN2018/111701 2018-08-01 2018-10-24 Method and device for determining liability for traffic accidents, and computer-readable storage medium WO2020024457A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810865643.0A CN108986474A (zh) 2018-08-01 2018-08-01 Method, device, computer equipment and computer storage medium for determining liability for traffic accidents
CN201810865643.0 2018-08-01

Publications (1)

Publication Number Publication Date
WO2020024457A1 true WO2020024457A1 (zh) 2020-02-06

Family

ID=64550756

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/111701 WO2020024457A1 (zh) 2018-08-01 2018-10-24 交通事故的定责方法、装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN108986474A (zh)
WO (1) WO2020024457A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415121A (zh) * 2020-03-31 2020-07-14 深圳前海微众银行股份有限公司 Cargo handling method, device, and storage medium based on epidemic protection
CN112509315A (zh) * 2020-11-04 2021-03-16 杭州远眺科技有限公司 Traffic accident detection method based on video analysis
CN113362590A (zh) * 2021-05-07 2021-09-07 武汉理工大学 Method for investigating the spatiotemporal characteristics of road-area traffic violations based on networked ADAS
CN113642360A (zh) * 2020-04-27 2021-11-12 杭州海康威视数字技术股份有限公司 Behavior timing method and device, electronic device, and storage medium
CN113706891A (zh) * 2020-05-20 2021-11-26 阿里巴巴集团控股有限公司 Traffic data transmission method and device, electronic device, and storage medium
CN113895431A (zh) * 2021-09-30 2022-01-07 重庆电子工程职业学院 Vehicle detection method and system
CN115834621A (zh) * 2022-11-16 2023-03-21 山东新一代信息产业技术研究院有限公司 Artificial-intelligence-based rapid accident handling device and method

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448359A (zh) * 2018-12-25 2019-03-08 东北林业大学 Rapid processing system and determination method for traffic accident determination
CN109741602A (zh) * 2019-01-11 2019-05-10 福建工程学院 Method and system for assisting liability determination in minor traffic accidents
CN109919140B (zh) * 2019-04-02 2021-04-09 浙江科技学院 Method, system, device, and storage medium for automatically determining liability in vehicle collision accidents
CN110135418A (zh) * 2019-04-15 2019-08-16 深圳壹账通智能科技有限公司 Picture-based traffic accident liability determination method, device, equipment, and storage medium
CN110363220B (zh) * 2019-06-11 2021-08-20 北京奇艺世纪科技有限公司 Behavior category detection method and device, electronic device, and computer-readable medium
CN112712691A (zh) * 2019-10-24 2021-04-27 广州汽车集团股份有限公司 Smart traffic accident handling method and device
CN111860383B (zh) * 2020-07-27 2023-11-10 苏州市职业大学 Group abnormal behavior recognition method, device, equipment, and storage medium
CN112102615B (zh) * 2020-08-28 2022-03-25 浙江大华技术股份有限公司 Traffic accident detection method, electronic device, and storage medium
CN112233421A (zh) * 2020-10-15 2021-01-15 胡歆柯 Machine-vision-based intelligent system for urban smart traffic monitoring
CN113873180A (zh) * 2021-08-25 2021-12-31 广东飞达交通工程有限公司 Method for discovering and merging the same event repeatedly detected by multiple video detectors
CN114419106B (zh) * 2022-03-30 2022-07-22 深圳市海清视讯科技有限公司 Vehicle violation detection method, device, and storage medium
CN114596711A (zh) * 2022-03-31 2022-06-07 北京世纪高通科技有限公司 Accident liability determination method, device, equipment, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902955A (zh) * 2012-08-30 2013-01-30 中国科学技术大学 Intelligent analysis method and system for vehicle behavior
CN107424412A (zh) * 2017-09-21 2017-12-01 程丹秋 Traffic behavior analysis system
CN107633570A (zh) * 2017-08-28 2018-01-26 武汉六点整北斗科技有限公司 Method for rapid clearance of traffic accidents and related products
CN107909113A (zh) * 2017-11-29 2018-04-13 北京小米移动软件有限公司 Traffic accident image processing method, device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004220423A (ja) * 2003-01-16 2004-08-05 Denso Corp System for grasping traffic accident situations at intersections, and electronic license plate
CN102789690B (zh) * 2012-07-17 2014-08-20 公安部道路交通安全研究中心 Illegal vehicle identification method and system
CN103116987B (zh) * 2013-01-22 2014-10-29 华中科技大学 Method for traffic flow statistics and violation detection based on surveillance video processing
CN103258432B (zh) * 2013-04-19 2015-05-27 西安交通大学 Video-based automatic traffic accident identification and processing method and system
CN104680795B (zh) * 2015-02-28 2018-02-27 武汉烽火众智数字技术有限责任公司 Vehicle model recognition method and device based on local region features
CN105070053B (zh) * 2015-07-21 2017-08-25 武汉理工大学 Intelligent traffic surveillance camera for recognizing vehicle violation motion patterns
CN107204114A (zh) * 2016-03-18 2017-09-26 中兴通讯股份有限公司 Method and device for recognizing abnormal vehicle behavior
CN106530730A (zh) * 2016-11-02 2017-03-22 重庆中科云丛科技有限公司 Traffic violation detection method and system
CN108229407A (zh) * 2018-01-11 2018-06-29 武汉米人科技有限公司 Behavior detection method and system for video analysis
CN108320348A (zh) * 2018-02-07 2018-07-24 广州道安信息科技有限公司 Method for generating dynamic traffic accident images, computer apparatus, and computer-readable storage medium


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111415121A (zh) * 2020-03-31 2020-07-14 深圳前海微众银行股份有限公司 Cargo handling method, device, and storage medium based on epidemic protection
CN111415121B (zh) * 2020-03-31 2023-08-22 深圳前海微众银行股份有限公司 Cargo handling method, device, and storage medium based on epidemic protection
CN113642360A (zh) * 2020-04-27 2021-11-12 杭州海康威视数字技术股份有限公司 Behavior timing method, apparatus, electronic device, and storage medium
CN113706891A (zh) * 2020-05-20 2021-11-26 阿里巴巴集团控股有限公司 Traffic data transmission method, apparatus, electronic device, and storage medium
CN112509315A (zh) * 2020-11-04 2021-03-16 杭州远眺科技有限公司 Traffic accident detection method based on video analysis
CN112509315B (zh) * 2020-11-04 2022-02-15 杭州远眺科技有限公司 Traffic accident detection method based on video analysis
CN113362590A (zh) * 2021-05-07 2021-09-07 武汉理工大学 Method for investigating spatio-temporal characteristics of road traffic violations based on networked ADAS
CN113895431A (zh) * 2021-09-30 2022-01-07 重庆电子工程职业学院 Vehicle detection method and system
CN113895431B (zh) * 2021-09-30 2023-07-04 重庆电子工程职业学院 Vehicle detection method and system
CN115834621A (zh) * 2022-11-16 2023-03-21 山东新一代信息产业技术研究院有限公司 Artificial-intelligence-based rapid accident handling apparatus and method

Also Published As

Publication number Publication date
CN108986474A (zh) 2018-12-11

Similar Documents

Publication Publication Date Title
WO2020024457A1 (zh) Traffic accident liability determination method and device, and computer-readable storage medium
CN109804367B (zh) Distributed video storage and search using edge computing
US10152858B2 (en) Systems, apparatuses and methods for triggering actions based on data capture and characterization
US11205068B2 (en) Surveillance camera system looking at passing cars
WO2020134858A1 (zh) Face attribute recognition method and apparatus, electronic device, and storage medium
CN106004883A (zh) Method and device for vehicle violation reminders
CN109523652B (zh) Driving-behavior-based insurance processing method, apparatus, device, and storage medium
WO2015117528A1 (zh) Driving record processing method and system
CN109389827A (zh) Dashcam-based evidence presentation method, apparatus, device, and storage medium
CN110619277A (zh) Multi-community intelligent surveillance deployment method and system
WO2022227490A1 (zh) Behavior recognition method, apparatus, device, storage medium, computer program, and program product
JP2023505122A (ja) Crowdsourced on-demand AI data annotation, collection, and processing
JP2020518165A (ja) Platform for management and verification of content such as video images and pictures generated by different devices
US11790658B2 (en) Investigation assist system and investigation assist method
CN112818839A (zh) Driver violation behavior recognition method, apparatus, device, and medium
WO2016201867A1 (zh) Identification method and device for M2M Internet of Vehicles
CN112100445A (zh) Image information processing method and apparatus, electronic device, and storage medium
CN114170585A (zh) Dangerous driving behavior recognition method and apparatus, electronic device, and storage medium
CN113076851A (zh) Vehicle violation data collection method and apparatus, and computer device
Hou et al. Early warning system for drivers’ phone usage with deep learning network
CN114913470B (zh) Event detection method and apparatus
TWI455072B (zh) Vehicle search system
CN111985304A (zh) Patrol alarm method, system, terminal device, and storage medium
CN113128294A (zh) Road event evidence collection method and apparatus, electronic device, and storage medium
CN109803067A (zh) Video condensation method, video condensation apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 18928818
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: PCT application non-entry in European phase
Ref document number: 18928818
Country of ref document: EP
Kind code of ref document: A1