CN113674523A - Traffic accident analysis method, device and equipment - Google Patents

Traffic accident analysis method, device and equipment

Info

Publication number
CN113674523A
Authority
CN
China
Prior art keywords
accident
traffic
information
video
analysis
Prior art date
Legal status
Pending
Application number
CN202011056050.3A
Other languages
Chinese (zh)
Inventor
王湛
陈亚军
耿长龙
霍毅
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to PCT/CN2021/076689 (published as WO2021227586A1)
Publication of CN113674523A

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/0104 Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 Traffic data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7847 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/26 Government or public services

Abstract

The application discloses a traffic accident analysis method applied to the field of intelligent traffic, in which at least one camera is installed on a traffic road to collect video of the traffic road. The method comprises the following steps: an analysis device acquires geographical track information of at least one object on the traffic road, where the at least one object comprises vehicles and pedestrians and the geographical track information can be obtained from the video collected by the camera; the analysis device determines an accident object according to the geographical track information of the at least one object; acquires environmental information of the accident object, for example, information about traffic marking lines and surrounding objects; and obtains an analysis result of the traffic accident according to the geographical track information and the environmental information of the accident object. The method automatically completes the discovery and analysis of traffic accidents and improves the efficiency of traffic accident handling.

Description

Traffic accident analysis method, device and equipment
Technical Field
The present application relates to the field of intelligent traffic (intelligent transportation), and in particular, to a traffic accident analysis method, apparatus, and device.
Background
With the rapid development of cities and the continuous improvement of living standards, the number of motor vehicles has grown rapidly, and the incidence of traffic accidents remains high. After a traffic accident occurs, if it is not handled promptly so that it can be cleared as soon as possible, the traffic state of the road is often severely affected, causing traffic congestion and even secondary accidents, which poses a serious safety hazard.
For the discovery and handling of traffic accidents, current methods mainly include: 1. A party involved or a bystander calls the police, and traffic police arrive at the scene to judge and assign responsibility for the traffic accident. 2. Remote handling by on-duty personnel: traffic police review pictures or videos of the accident scene uploaded online by the parties involved, and judge responsibility for the accident according to the vehicles, traffic road markings, vehicle marks, and other information in the pictures or videos. Both of the above methods suffer from low traffic accident handling efficiency.
Improving the efficiency of traffic accident handling is a problem that urgently needs to be solved in the traffic field.
Disclosure of Invention
The application provides a traffic accident analysis method, apparatus, and device applied to the field of intelligent traffic. The discovery and analysis of traffic accidents are completed automatically by the analysis device, thereby improving the efficiency of traffic accident handling.
In a first aspect, the present application provides a traffic accident analysis method, in which at least one camera is installed on a traffic road to collect video of the traffic road. The method comprises: the analysis device acquires geographical track information of at least one object on the traffic road, wherein the at least one object comprises a vehicle and a pedestrian; the analysis device determines an accident object according to the geographical track information of the at least one object; the analysis device acquires environmental information of the accident object, for example, information about traffic marking lines and surrounding objects; and the analysis device obtains an analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
The method automatically acquires the geographical track information of objects on the traffic road to determine the accident object, and then analyzes the accident by combining the geographical track information and the environmental information of the accident object to obtain an analysis result. The method can rapidly discover and analyze traffic accidents occurring on the traffic road, and the obtained analysis result can be used for handling the traffic accidents, thereby improving the efficiency of traffic accident handling.
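As an illustration of the overall flow just described (geographical track information, then accident object, then environment information, then analysis result), the following Python sketch uses hypothetical names and a toy anomaly rule; it is not the disclosed implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TrackPoint:
    t: float      # timestamp in seconds
    lat: float
    lon: float
    speed: float  # metres per second

@dataclass
class ObjectTrack:
    object_id: str
    kind: str                                   # "vehicle" or "pedestrian"
    points: List[TrackPoint] = field(default_factory=list)

def is_abnormal(track: ObjectTrack, stop_speed: float = 0.5,
                min_stop_s: float = 30.0) -> bool:
    """Toy anomaly rule (assumed): a vehicle near-stationary on the road for long."""
    if track.kind != "vehicle":
        return False
    stopped = [p for p in track.points if p.speed < stop_speed]
    return len(stopped) >= 2 and stopped[-1].t - stopped[0].t >= min_stop_s

def analyze(tracks: List[ObjectTrack],
            get_environment: Callable[[ObjectTrack], dict]) -> List[dict]:
    """Find accident objects, then pair each with its environment information."""
    results = []
    for trk in tracks:
        if is_abnormal(trk):
            results.append({"object_id": trk.object_id,
                            "environment": get_environment(trk)})
    return results
```

In practice `get_environment` would stand in for the lookup of traffic marking lines and surrounding objects; here it is just a callback so the flow is visible end to end.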
In one possible implementation of the first aspect, the analysis result includes accident liability determination information, which represents the responsibility to be borne by each party to the traffic accident. Because the analysis device obtains the accident liability determination information automatically, traffic accident handling can be accelerated and police resources can be saved.
In one possible implementation of the first aspect, the method further comprises: sending the analysis result to a terminal associated with the accident object. By sending the analysis result to the terminal associated with the accident object, the parties involved in the traffic accident can quickly acquire the relevant information of the traffic accident.
In one possible implementation of the first aspect, the method further comprises: receiving feedback information sent by a terminal associated with the accident object, wherein the feedback information indicates whether there is an objection to the accident responsibility determination information.
After receiving the feedback information, the analysis device may determine the next action according to the feedback information. For example, if the feedback information indicates that the accident party has no objection to the responsibility determination information, the analysis device automatically notifies the management platform to issue a ticket, or notifies the insurance company to start the claim settlement process. If the feedback information indicates that the accident party objects to the responsibility determination information, the analysis device sends the analysis result to the management platform, and the management platform makes a further decision.
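The feedback-driven branching can be sketched as follows; the function and callback names are illustrative assumptions, not part of the application:

```python
def route_feedback(has_objection: bool, notify_platform, issue_ticket,
                   start_claim) -> str:
    """Next action after the accident party's feedback (hypothetical sketch):
    no objection -> ticket and claim processes start automatically;
    objection    -> the analysis result is escalated to the management platform."""
    if has_objection:
        notify_platform()
        return "escalated"
    issue_ticket()
    start_claim()
    return "auto-processed"
```

The callbacks stand in for whatever notification channel (management platform API, insurer API) a concrete deployment would use.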
In a possible implementation of the first aspect, the analysis result includes one or any combination of the following information: accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident. These analysis results help the accident parties or administrators understand the traffic accident more thoroughly.
In one possible implementation of the first aspect, the method further comprises: and sending the analysis result to the management platform. The management personnel can know the traffic accident and make a decision by sending the analysis result to the management platform.
In one possible implementation of the first aspect, the method further comprises: receiving an accident feedback result sent by the management platform; and sending the accident feedback result to a terminal associated with the accident object. The accident feedback result can be the final determination of accident responsibility information or the treatment result of the traffic accident. The analysis device can quickly transmit the accident feedback result, so that the accident handling can be accelerated, and the labor cost of the traffic accident handling can be saved.
In one possible implementation of the first aspect, the analysis result includes a degree of impact of the traffic accident, where the degree of impact comprises: the degree of traffic congestion caused, the collision intensity of the accident object, or the degree of damage to the accident object.
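The application does not specify how these impact measures are computed; as a hedged illustration, delta-v (change in speed) is a common proxy for collision intensity, and the fraction of blocked lanes is a simple congestion measure:

```python
def collision_intensity(speed_before: float, speed_after: float) -> float:
    """Delta-v in m/s, a common (assumed here) proxy for collision severity."""
    return abs(speed_before - speed_after)

def jam_impact(blocked_lanes: int, total_lanes: int) -> float:
    """Fraction of road capacity lost to the accident, capped at 1.0."""
    return min(1.0, blocked_lanes / total_lanes)
```

For example, a vehicle dropping from 20 m/s to 5 m/s in a collision has a delta-v of 15 m/s, and one blocked lane of three removes a third of the capacity.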
In one possible implementation of the first aspect, the method further comprises: notifying a third party of the degree of impact of the traffic accident. The third party may be a traffic news platform, a traffic radio station, a management platform, or a terminal associated with an accident object.
In one possible implementation of the first aspect, determining the accident object based on the geographical trajectory information of the at least one object comprises: determining an abnormal track according to the geographical track information of at least one object; and determining the accident object by using the abnormal track and a pre-trained artificial intelligence AI model. The accident object can be identified more accurately by determining the abnormal track and then determining the accident object according to the abnormal track and the AI model.
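A minimal sketch of this two-stage idea, under the assumption that hard deceleration marks a track as abnormal and that a pre-trained model (stubbed here as a callable) confirms which objects are actually accident objects; names and thresholds are illustrative:

```python
def hard_braking(samples, threshold: float = -6.0) -> bool:
    """Flag a trajectory containing deceleration beyond ~6 m/s^2, one plausible
    (assumed) trigger for marking a track as abnormal.
    samples: list of (t_seconds, speed_mps) tuples, ordered by time."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt > 0 and (v1 - v0) / dt < threshold:
            return True
    return False

def classify_accident(abnormal_tracks, model):
    """Hand abnormal tracks to a pre-trained AI model (stubbed as a callable
    returning True/False) that decides which objects were in an accident."""
    return [trk for trk in abnormal_tracks if model(trk)]
```

The two stages mirror the text: a cheap trajectory rule narrows the candidates, and the model delivers the final decision.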
In one possible implementation of the first aspect, obtaining geographical trajectory information of at least one object on a traffic road comprises: acquiring videos collected by at least one camera arranged on a traffic road; and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
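One common way to derive geographical track information from per-camera video, assumed here purely for illustration (the application does not prescribe it), is to map each detection's pixel coordinates to road-plane coordinates with a calibrated ground-plane homography:

```python
import numpy as np

def pixel_to_ground(H: np.ndarray, u: float, v: float) -> tuple:
    """Map an image pixel (u, v) to ground-plane coordinates via a 3x3
    homography H obtained from camera calibration (assumed known)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)
```

For instance, with an example H of `np.diag([0.5, 0.5, 1.0])`, the pixel (3, 4) maps to the ground point (1.5, 2.0); chaining such points per frame, per camera, yields a track over the road section each camera covers.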
In one possible implementation of the first aspect, before obtaining the geographical trajectory information of the at least one object on the traffic road, the method further comprises: and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
In one possible implementation of the first aspect, the acquiring geographic trajectory information of at least one object on a traffic road includes: acquiring, according to the reported information, a first video from the videos collected by the at least one camera arranged on the traffic road, wherein the first video corresponds to the traffic accident occurring on the traffic road; and acquiring the geographical track information of at least one object on the traffic road according to the first video. The corresponding first video is determined from the information reported by a bystander or an accident party, and analyzing only the first video to obtain the track information of the at least one object saves computing resources and allows the analysis device to analyze the accident in a more targeted manner.
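A possible sketch of selecting the first video from the reported information; the field names and the time window are assumptions, not from the application:

```python
def select_first_video(cameras, report: dict, window_s: float = 120.0):
    """Pick the camera nearest the reported accident location and a time
    window around the reported time.
    cameras: list of {"id": str, "lat": float, "lon": float};
    report:  {"lat": float, "lon": float, "t": float} (illustrative fields)."""
    nearest = min(cameras, key=lambda c: (c["lat"] - report["lat"]) ** 2 +
                                         (c["lon"] - report["lon"]) ** 2)
    t = report["t"]
    return nearest["id"], (t - window_s, t + window_s)
```

Only the clip from that camera and window would then be analyzed, which is how the reported information saves computing resources.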
In one possible implementation of the first aspect, the environmental information of the accident object includes one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
In a second aspect, the present application further provides a traffic accident analysis method, including: acquiring a first video, wherein the first video is collected by at least one camera arranged on a traffic road and corresponds to a traffic accident on the traffic road; acquiring an analysis result of the traffic accident according to the first video; and providing the analysis result of the traffic accident. The method analyzes the traffic accident based on the first video and provides the analysis result, so that the analysis result can be obtained quickly, helping accident-handling personnel deal with the accident promptly.
In one possible implementation of the second aspect, the method further comprises: acquiring reported information sent by a terminal, wherein the reported information comprises geographical position information of the occurrence of the traffic accident or time information of the occurrence of the traffic accident, or information of a camera for collecting the first video or an accident object of the traffic accident; the acquiring the first video comprises: and acquiring the first video from the videos acquired by the at least one camera according to the reported information.
In one possible implementation of the second aspect, the analysis result includes accident liability determination information.
In one possible implementation of the second aspect, the providing the analysis result of the traffic accident comprises: and sending the analysis result to a terminal associated with the accident object.
In one possible implementation of the second aspect, the method further comprises: receiving feedback information sent by a terminal associated with the accident object, wherein the feedback information indicates whether there is an objection to the accident responsibility determination information.
In one possible implementation of the second aspect, the analysis result includes one or any combination of the following information: accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident.
In one possible implementation of the second aspect, the providing the analysis result of the traffic accident comprises: and sending the analysis result to a management platform.
In one possible implementation of the second aspect, the method further comprises: receiving an accident feedback result sent by the management platform; and sending the accident feedback result to a terminal associated with the accident object.
In one possible implementation of the second aspect, the analysis result includes a degree of impact of the traffic accident, where the degree of impact comprises: the degree of traffic congestion caused, the collision intensity of the accident object, or the degree of damage to the accident object.
In one possible implementation of the second aspect, the providing the analysis result of the traffic accident comprises: and informing a third party of the influence degree of the traffic accident.
In one possible implementation of the second aspect, the obtaining the analysis result of the traffic accident according to the first video includes: acquiring geographical track information of the accident object according to the first video; acquiring environmental information of the accident object; and obtaining an analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
In one possible implementation of the second aspect, the environmental information of the accident object includes one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
In a third aspect, the present application provides an analysis device for traffic accidents, the device comprising: a data processing module configured to obtain geographical track information of at least one object on a traffic road; an accident discovery module configured to determine an accident object according to the geographical track information of the at least one object; and an accident analysis module configured to acquire environmental information of the accident object and obtain an analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
In one implementation of the third aspect, the analysis result includes accident liability determination information.
In one implementation of the third aspect, the device further comprises a result sending module configured to send the analysis result to a terminal associated with the accident object.
In an implementation of the third aspect, the result sending module is further configured to receive feedback information sent by a terminal associated with the accident object, where the feedback information indicates whether there is an objection to the accident responsibility determination information.
In one implementation of the third aspect, the analysis result includes one or any combination of the following information: the system comprises time information of accidents, accident types and descriptions, accident causes, evidence videos and accident detail images, wherein the evidence videos are used for showing the causes or processes of the traffic accidents.
In an implementation of the third aspect, the result sending module is further configured to send the analysis result to a management platform.
In an implementation of the third aspect, the result sending module is further configured to receive an accident feedback result sent by the management platform; and sending the accident feedback result to a terminal associated with the accident object.
In one implementation of the third aspect, the analysis result includes a degree of impact of the traffic accident, where the degree of impact comprises: the degree of traffic congestion caused, the collision intensity of the accident object, or the degree of damage to the accident object.
In one implementation of the third aspect, the result sending module is further configured to notify a third party of the degree of influence of the traffic accident.
In one implementation of the third aspect, the accident discovery module is configured to determine an abnormal trajectory according to the geographical trajectory information of the at least one object; and determining the accident object by using the abnormal track and a pre-trained artificial intelligence AI model.
In one implementation of the third aspect, the data processing module is configured to acquire a video captured by at least one camera disposed on the traffic road; and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
In an implementation of the third aspect, the data processing module is further configured to: and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
In an implementation of the third aspect, the data processing module is configured to: acquiring a first video in videos collected by at least one camera arranged on the traffic road according to the reported information, wherein the first video corresponds to the traffic accident occurring on the traffic road; and acquiring the geographical track information of at least one object on the traffic road according to the first video.
In one implementation of the third aspect, the environmental information of the accident object includes one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
In a fourth aspect, the present application further provides an analysis device comprising: the acquisition module is used for acquiring a first video, the first video is acquired by at least one camera arranged on a traffic road, and the first video corresponds to a traffic accident on the traffic road. And the analysis module is used for acquiring an analysis result of the traffic accident according to the first video. And the providing module is used for providing the analysis result of the traffic accident.
In one implementation of the fourth aspect, the analysis result includes accident responsibility determination information.
In one implementation of the fourth aspect, the providing module is configured to send the analysis result to a terminal associated with an accident object.
In an implementation of the fourth aspect, the providing module is further configured to receive feedback information sent by a terminal associated with an accident object, where the feedback information indicates whether there is an objection to the accident responsibility determination information.
In an implementation of the fourth aspect, the analysis results may include one or more of the following information: accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident.
In an implementation of the fourth aspect, the providing module is further configured to send the analysis result to a management platform.
In an implementation of the fourth aspect, the providing module is further configured to receive an accident feedback result sent by the management platform; and sending the accident feedback result to a terminal associated with the accident object.
In one implementation of the fourth aspect, the analysis result includes a degree of impact of the traffic accident, where the degree of impact comprises: the degree of traffic congestion caused, the collision intensity of the accident object, or the degree of damage to the accident object.
In one implementation of the fourth aspect, the providing module is further configured to notify a third party of the degree of influence of the traffic accident.
In one implementation of the fourth aspect, the analysis module is configured to determine an abnormal trajectory according to geographical trajectory information of at least one object; and determining the accident object by using the abnormal track and a pre-trained artificial intelligence AI model.
In one implementation of the fourth aspect, the acquiring module is configured to acquire a video captured by at least one camera disposed on the traffic road; and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
In an implementation of the fourth aspect, the obtaining module is further configured to: and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
In an implementation of the fourth aspect, the obtaining module is configured to: acquiring a first video in videos collected by at least one camera arranged on the traffic road according to the reported information, wherein the first video corresponds to the traffic accident occurring on the traffic road; and acquiring the geographical track information of at least one object on the traffic road according to the first video.
In one implementation of the fourth aspect, the environmental information of the accident object includes one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
In a fifth aspect, the present application further provides a computing device comprising a processor and a memory, the memory storing computer instructions, the processor executing the computer instructions to cause the computing device to perform the method of the foregoing first aspect or any possible implementation of the first aspect or to perform the method of the foregoing second aspect or any possible implementation of the second aspect.
In a sixth aspect, the present application further provides a computer-readable storage medium having stored thereon computer program code which, when executed by a computing device, performs the method of the foregoing first aspect or any possible implementation of the first aspect, or performs the method of the foregoing second aspect or any possible implementation of the second aspect. The computer-readable storage medium includes, but is not limited to, volatile memory such as random access memory, and non-volatile memory such as flash memory, hard disk drives (HDD), and solid state drives (SSD).
In a seventh aspect, the present application further provides a computer program product comprising computer program code which, when executed by a computing device, performs the method provided in the foregoing first aspect or any possible implementation of the first aspect, or performs the method provided in the foregoing second aspect or any possible implementation of the second aspect. The computer program product may be a software installation package, which may be downloaded and executed on a computing device when the method provided in the first aspect or the second aspect, or any possible implementation thereof, needs to be used.
In an eighth aspect, the present application further provides a system for traffic accident analysis, the system comprising at least one camera disposed on a traffic road and an analysis device; the at least one camera is configured to capture video and send the video to the analysis device, and the analysis device is configured to perform, according to the captured video, the method provided in the first aspect or any possible implementation of the first aspect, or the method provided in the second aspect or any possible implementation of the second aspect.
Drawings
Fig. 1A is a schematic deployment diagram of an analysis apparatus provided in an embodiment of the present application;
fig. 1B is a schematic deployment diagram of another analysis apparatus provided in the embodiments of the present application;
fig. 1C is a schematic deployment diagram of another analysis apparatus provided in the embodiment of the present application;
fig. 2 is a schematic structural diagram of a computing device 100 according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a computing device system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an analysis apparatus 300 according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a traffic accident analysis method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a display interface for analysis results according to an embodiment of the present application;
fig. 7 is a schematic flow chart of another traffic accident analysis method according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another analysis apparatus 800 according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings attached to the present application.
In order to make the solutions of the embodiments of the present application clearer, relevant terms are first explained before the solutions of the embodiments are described in detail.
Traffic accidents: a traffic accident in this application refers to an event in which collision or friction occurs between objects on a traffic road, or between an object and a static facility, or an event in which an object on a traffic road suffers a fault of its own, such as: collision or friction between vehicles, between a vehicle and a pedestrian, or between a vehicle and a roadside facility, spontaneous combustion of a vehicle, vehicle rollover, and the like.
Traffic accident liability assessment: for a traffic accident that has occurred, liability is assigned to the parties of the accident by combining the cause of the accident with traffic management rules. There are many possible causes of a traffic accident, and the liability assignment is generally obtained by analyzing information in various aspects, such as the driving trajectories and driving speeds of the vehicles, the traffic marking lines of the traffic road, the traffic lights, and the traffic road conditions. Common causes of traffic accidents include: the rear vehicle travels too fast and fails to keep a safe braking distance from the front vehicle, causing a rear-end accident; the front vehicle changes lanes illegally, so that a vehicle behind it in the adjacent lane cannot react in time, causing a collision accident; the driving trajectory of a vehicle is not controlled in time, so that the vehicle collides with a guardrail beside the traffic road; a vehicle runs a red light and collides with a pedestrian; the driver of an engineering truck scrapes a car in the adjacent lane because of a blind spot in the field of view.
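As an illustration only (not part of the claimed method), the mapping from an identified accident cause to a liability assignment can be sketched as a simple rule table. The cause labels and liability shares below are hypothetical examples; real assessments follow local traffic management rules:

```python
# Hypothetical rule table mapping an identified accident cause to a
# liability split between the parties. Labels and shares are illustrative.
LIABILITY_RULES = {
    "rear_end_unsafe_distance": {"rear_vehicle": 1.0, "front_vehicle": 0.0},
    "illegal_lane_change":      {"lane_changer": 1.0, "other_vehicle": 0.0},
    "ran_red_light":            {"violating_vehicle": 1.0, "pedestrian": 0.0},
}

def assess_liability(cause: str) -> dict:
    """Return the liability split for a known cause; unknown causes
    are flagged for manual review by a traffic manager."""
    return LIABILITY_RULES.get(cause, {"manual_review": 1.0})
```

In practice such rules would be combined with the trajectory, speed, and scene analysis described below rather than applied from a cause label alone.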
Traffic roads: a traffic road in this application includes road segments and intersections for vehicles or pedestrians to pass through, as well as the areas adjacent to those road segments and intersections (such as flower beds and fences next to a road segment). It should be understood that the traffic road referred to in this application may be one road segment, one intersection, or a traffic region composed of several road segments and intersections together with their surrounding environment (for example, all the road segments and intersections in the Longgang District of Shenzhen, together with their surrounding environment, may be referred to as one traffic road).
Objects: an object in this application refers to an object moving on a traffic road, or a moving object that is stationary for a short time, such as: vehicles on the traffic road (automobiles, bicycles, etc.), pedestrians, animals, and the like.
Track: the track of a vehicle or pedestrian in this application includes two types: the track of an object in a video, and the track of an object on a traffic road.
Trajectory of an object in a video: the path of an object, during its motion on a traffic road, as recorded in a video shot by a camera disposed on or near the traffic road. The track information corresponding to the track of an object in a video is called the video track information of the object. Each piece of video track information is a pixel coordinate sequence, which includes a plurality of pixel coordinates arranged in time order; each pixel coordinate represents the coordinate value of a pixel point of the object in a video frame of the video. A pixel coordinate is a two-dimensional coordinate, and it represents the position of the pixel point in the image.
Trajectory of an object on a traffic road: the path formed by the object during its movement on the traffic road. The track information corresponding to the track of an object on a traffic road is called the geographical track information of the object. Each piece of geographical track information is a geographical coordinate sequence, which includes a plurality of geographical coordinates arranged in time order; each geographical coordinate represents the geographical coordinate value of a point of the object on the traffic road. It should be noted that the geographic coordinates of the object may be coordinate values in any coordinate system of the physical world. In this application, the geographic coordinates of the object are represented by three-dimensional coordinate values consisting of the longitude, latitude, and altitude corresponding to the position of the object on the traffic road. In other embodiments, the geographic coordinates of the object may also be represented by coordinate values in a natural coordinate system.
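The two kinds of track information defined above can both be represented as time-ordered coordinate sequences. A minimal sketch of such representations follows; the field names are assumptions for illustration, not part of this application:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VideoTrack:
    """Video track information: pixel coordinates arranged in time order."""
    object_id: str
    camera_id: str
    points: List[Tuple[float, float, float]]  # (timestamp, x_pixel, y_pixel)

@dataclass
class GeoTrack:
    """Geographical track information: (timestamp, longitude, latitude, altitude)."""
    object_id: str
    points: List[Tuple[float, float, float, float]]

def is_time_ordered(points) -> bool:
    """Both kinds of track require their coordinates to be in time order."""
    times = [p[0] for p in points]
    return all(a <= b for a, b in zip(times, times[1:]))
```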
It is to be understood that the trajectory of an object on a traffic road may be recorded by a plurality of cameras disposed on the traffic road; therefore, the trajectory of the object on the traffic road may be obtained from the trajectories of the object in the videos shot by the plurality of cameras.
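A minimal sketch of obtaining one geographical trajectory from track segments recorded by several cameras: each segment is assumed to have already been converted from pixel coordinates to geographic coordinates, and the segments are then merged in time order (the function name and deduplication rule are assumptions):

```python
def merge_geo_segments(segments):
    """Merge per-camera geographic track segments into one trajectory.

    Each segment is a list of (timestamp, lon, lat, alt) tuples. When two
    cameras record the object at the same timestamp, the first point seen
    for that timestamp is kept; the result is sorted in time order.
    """
    merged = {}
    for seg in segments:
        for t, lon, lat, alt in seg:
            merged.setdefault(t, (lon, lat, alt))
    return [(t, *merged[t]) for t in sorted(merged)]
```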
With the rapid development of the urban economy, traffic roads are designed to be more and more complex, and more and more private cars and engineering vehicles travel on them. The resulting traffic congestion and frequent traffic accidents have become serious urban problems. Quickly analyzing and handling traffic accidents that have already occurred is of great value in relieving traffic congestion and avoiding secondary traffic accidents.
With the rapid development of Artificial Intelligence (AI) technology, applying AI technology to the traffic field to solve practical problems has become a hot topic in academia and industry, and is also a core idea in building smart cities.
In this application, the trajectory of an object on a traffic road is determined by combining videos shot by a plurality of cameras; traffic accidents are discovered according to the geographical track information of the object, and the discovered traffic accidents are further analyzed to obtain an analysis result.
The traffic accident analysis method provided by the application can be executed by an analysis device. The functions of the analysis device may be implemented by a software system, a hardware device, or a combination of a software system and a hardware device.
The analysis device can be flexibly deployed; for example, it may be deployed in an edge environment: the analysis device may be an edge computing device in an edge environment, or a software device running on one or more edge computing devices. An edge environment refers to a data center or a collection of edge computing devices close to the traffic region (or traffic road) on which traffic accident analysis is to be performed; it includes one or more edge computing devices, which may be roadside devices with computing capabilities disposed at the side of the traffic road.
For example, as shown in fig. 1A, the analysis device is deployed on an edge computing device at the roadside, close to an intersection in which a network-connectable camera A and camera B are disposed. Camera A shoots a video A of passing vehicles on the left side of the intersection and sends video A to the analysis device in the edge environment through the network. Camera B shoots a video B of passing vehicles on the right side of the intersection and sends video B to the analysis device through the network. The analysis device can discover and analyze traffic accidents occurring on the traffic road according to the received video A and video B, as well as videos shot by other cameras not shown in fig. 1A. The analysis device can send the analysis result to the management platform, so that the traffic police of the management platform can perform further processing and law enforcement according to the analysis result. Optionally, the analysis device may further send the analysis result to the vehicle or terminal device of a party to the traffic accident, so that the party can learn the analysis result of the traffic accident.
The analysis device may also be deployed in a cloud environment, which is an entity that uses underlying resources to provide cloud services to users in a cloud computing mode. A cloud environment includes a cloud data center and a cloud service platform; the cloud data center includes a large number of infrastructure resources (including computing resources, storage resources, and network resources) owned by the cloud service provider, and the computing resources may include a large number of computing devices (e.g., servers). The analysis device may be a server in the cloud data center used for traffic accident analysis; the analysis device may also be a virtual machine created in the cloud data center for traffic accident analysis; the analysis device may also be a software device deployed on a server or a virtual machine in the cloud data center, the software device being used for discovering and analyzing traffic accidents, and the software device may be deployed in a distributed manner on a plurality of servers, on a plurality of virtual machines, or on a combination of virtual machines and servers.
For example, as shown in fig. 1B, the analysis device is deployed in a cloud environment. The network-connectable cameras A and B disposed beside the traffic road as shown in the figure, as well as cameras not shown in the figure, can transmit their captured videos to the analysis device in the cloud environment. The analysis device can discover and analyze traffic accidents on the traffic road according to the received videos. The analysis device can send the analysis result to the management platform, so that the traffic police of the management platform can perform further processing and law enforcement according to the analysis result. Optionally, the analysis device may further send the analysis result to the vehicle or terminal device of a party to the traffic accident, so that the party can learn the analysis result of the traffic accident.
The analysis device can be deployed in a cloud data center by a cloud service provider; the cloud service provider abstracts the functions provided by the analysis device into a cloud service, and users can consult and purchase the cloud service on the cloud service platform. After purchasing the cloud service, a user can use the traffic accident analysis service provided by the analysis device of the cloud data center. The analysis device can also be deployed in computing resources (for example, virtual machines) of a cloud data center rented by a tenant: the tenant purchases a computing resource cloud service provided by the cloud service provider through the cloud service platform and runs the analysis device in the purchased computing resources, so that the analysis device executes the traffic accident analysis method. It should be understood that the functions provided by the analysis device can also be abstracted into a cloud service together with the functions provided by other functional devices. For example, the cloud service provider abstracts the traffic accident analysis function provided by the analysis device and the real-time flow computation and monitoring function provided by a flow computing device into a traffic state management cloud service; after purchasing the traffic state management cloud service, a user can obtain both the service of analyzing traffic accidents on the traffic road and the service of monitoring traffic flow in real time.
When the analysis device is a software device, it may be logically divided into a plurality of parts, each having a different function (for example: the analysis device includes a data processing module, an accident discovery module, an accident analysis module, and a result sending module). The parts of the analysis device can be deployed in different environments or on different devices, and the parts deployed in different environments or devices cooperate to implement the traffic accident analysis function. For example, as shown in fig. 1C, the data processing module of the analysis device is deployed on an edge computing device, and the accident discovery module, the accident analysis module, and the result sending module are deployed in a cloud data center (for example, on a server or a virtual machine of the cloud data center). The plurality of cameras disposed on the traffic road each transmit the captured video to the data processing module on the edge computing device. The data processing module processes each video, detects and tracks objects such as vehicles and pedestrians recorded in the videos, and converts the obtained tracks of the objects in the videos into tracks of the objects on the traffic road; the data processing module may also obtain attribute information, speed information, and the like of the objects. The data processing module sends the obtained object information to the cloud data center; the accident discovery module, the accident analysis module, and the result sending module deployed in the cloud data center further discover and analyze traffic accidents according to the object information, and the obtained analysis result is sent to the management platform or displayed on an interface.
It should be understood that the present application does not limit how the parts of the analysis device are divided, nor which part is deployed in which environment. In actual application, the deployment can be adapted according to the computing capability of each computing device or the specific application requirements. It is noted that, in some embodiments, a camera may be a smart camera with certain computing capabilities, and the analysis device may receive structured data processed by the smart camera instead of video. The analysis device can also be deployed in three parts: one part in the smart camera, another part on the edge computing device, and another part on a cloud computing device.
When the analysis device is a software device, the analysis device can be deployed on one computing device in any environment (cloud environment, edge environment) or on a terminal computing device (for example, a smart phone or a smart tablet); when the analysis means is a hardware device, the analysis means may be a computing device or a terminal computing device in any environment. Fig. 2 provides a schematic structural diagram of a computing device 100, and the computing device 100 shown in fig. 2 may be a computing device in any environment or a terminal computing device. Computing device 100 includes memory 101, processor 102, communication interface 103, and bus 104. The memory 101, the processor 102 and the communication interface 103 are connected to each other through a bus 104.
The memory 101 may be a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 101 may store computer instructions; when the computer instructions stored in the memory 101 are executed by the processor 102, the processor 102 and the communication interface 103 are used to perform the traffic accident analysis method. The memory may also store data, for example: a portion of the memory 101 is used to store the data required for traffic accident analysis, as well as intermediate data or result data during program execution.
The processor 102 may be a general-purpose Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Graphics Processing Unit (GPU), or any combination thereof. The processor 102 may include one or more chips, and the processor 102 may include an AI accelerator, such as: a neural Network Processor (NPU).
The communication interface 103 enables communication between the computing device 100 and other devices or communication networks using transceiver modules such as, but not limited to, transceivers. For example, data required for traffic accident analysis may be acquired through the communication interface 103.
Bus 104 may include a path that transfers information between components of computing device 100 (e.g., memory 101, processor 102, communication interface 103).
As can be seen from the foregoing, the analysis device may be deployed in a distributed manner across multiple computing devices in different environments or in the same environment. Fig. 3 provides a schematic structural diagram of a computing device system in which multiple computing devices 200 cooperatively implement the functions of the analysis device by executing computer instructions with their processors.
As shown in fig. 3, each computing device 200 includes a memory 201, a processor 202, a communication interface 203, and a bus 204. The memory 201, the processor 202 and the communication interface 203 are connected to each other through a bus 204.
The memory 201 may be a ROM, a static storage device, a dynamic storage device, or a RAM. The memory 201 may store computer instructions; when the computer instructions stored in the memory 201 are executed by the processor 202, the processor 202 and the communication interface 203 are used to perform part of the traffic accident analysis method. The memory may also store data, for example: a portion of the memory 201 is used to store the data needed for traffic accident analysis, as well as intermediate data or result data during the execution of the computer instructions.
The processor 202 may be a general-purpose CPU, an ASIC, a GPU, or any combination thereof. The processor 202 may include one or more chips, and the processor 202 may include an AI accelerator, such as an NPU.
The communication interface 203 enables communication between the computing device 200 and other devices or communication networks using transceiver modules such as, but not limited to, transceivers. For example, video or radar data required for traffic accident analysis may be acquired through the communication interface 203.
Bus 204 may include a pathway to transfer information between various components of computing device 200 (e.g., memory 201, processor 202, communication interface 203).
A communication path is established between the above-mentioned computing devices 200 through a communication network. Each computing device 200 runs a part of the analysis device (e.g., one or more of the data processing module, accident discovery module, accident analysis module, and result sending module of the analysis device). Any of the computing devices 200 may be a server in a cloud data center, a computing device in an edge data center, or a terminal computing device.
It should be understood that, when the functions of the analysis device are implemented by the above computing device system, the computing devices 200 in the computing device system may be of the same structure or model, as shown in fig. 3. In other embodiments, the computing devices in the computing device system may also be of different structures or models, for example: the computing device used for data processing is an 8-core server with strong computing power, the computing device used for accident discovery and analysis is a 4-core server, and the computing device used for result sending is a computing device with a display interface.
Fig. 4 depicts a schematic structural diagram of an analysis apparatus 300 according to an embodiment of the present application. It should be understood that fig. 4 shows only an exemplary division of the structure of the analysis apparatus according to the present application, and the present application does not limit the specific division of the structure of the analysis apparatus.
The functions of the various modules of the analysis apparatus 300 are exemplarily described below with reference to fig. 4. It should be understood that the functions of the modules of the analysis apparatus 300 described below are only functions that the analysis apparatus may have in some embodiments of the present application, and are not limited to the functions that the analysis apparatus has.
As shown in fig. 4, the analysis apparatus 300 includes a data processing module 301, an accident discovery module 302, an accident analysis module 303, and may further include a result transmission module 304 and/or a result display module 305.
The data processing module 301 is configured to obtain geographical trajectory information of at least one object on a traffic road. Specifically, the data processing module 301 is configured to receive raw data collected by raw data collection devices, where the raw data collection devices may include various types of cameras, laser radars, infrared radars, and the like disposed near the traffic road to capture the traffic conditions of the traffic road. The raw data received by the data processing module 301 may include videos captured by a plurality of cameras, laser radar data, infrared radar data, and the like. The data processing module 301 detects and tracks the objects recorded in the received videos, obtains the trajectory of each object in one or more videos within a period of time, and further obtains the geographical trajectory information of the object according to the trajectory of the object in the one or more videos. The data processing module 301 is further configured to send the geographical trajectory information of the at least one object to the accident discovery module 302.
Optionally, the data processing module 301 may perform more accurate positioning on the position of the object on the traffic road by combining radar data (e.g., laser radar data and infrared radar data) in the process of detecting and tracking the object recorded in the received video and obtaining geographical track information of the object according to the track of the object in one or more videos, so that the obtained geographical track information of the object is more accurate.
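One simple way to combine a video-derived position with a radar-derived position of the same object, in the spirit of the refinement described above, is a confidence-weighted average. This is only an illustrative sketch; the weighting scheme and the default weight are assumptions, and real fusion may use filtering methods instead:

```python
def fuse_position(video_pos, radar_pos, radar_weight=0.7):
    """Fuse a video-derived and a radar-derived geographic position.

    Radar is assumed here to measure position more precisely, so it is
    given the larger weight; the exact weight is an illustrative choice.
    Both positions are coordinate tuples of the same dimension.
    """
    w = radar_weight
    return tuple((1 - w) * v + w * r for v, r in zip(video_pos, radar_pos))
```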
Optionally, the data processing module 301 is further configured to determine speeds of the detected object at multiple moments on the traffic road, and send speed information of the object to the accident discovery module 302 or the accident analysis module 303.
Optionally, the data processing module 301 is further configured to determine accelerations of the detected object at multiple moments on the traffic road, and send acceleration information of the object to the accident discovery module 302 or the accident analysis module 303.
Optionally, the data processing module 301 is further configured to determine the detected postures of the object at multiple moments on the traffic road, and send posture information of the object to the accident discovery module 302 or the accident analysis module 303.
Optionally, the data processing module 301 is further configured to receive report information sent by a terminal. The terminal may be a terminal associated with an accident object, for example, the terminal or on-board computer of a party to the traffic accident; the terminal may also be a terminal of a person who witnessed the traffic accident (for example, terminals of pedestrians or residents near the place where the traffic accident occurred). The report information may include: the geographical position information of the traffic accident, the time information of the traffic accident, information of the camera collecting the first video, or information of an accident object of the traffic accident. The data processing module 301 may send the report information to the accident discovery module 302 or to the accident analysis module 303.
The accident discovery module 302 is configured to discover the track of an abnormal vehicle according to the tracks of objects on the traffic road, further determine whether a traffic accident has occurred to the abnormal vehicle, determine the traffic accident, and determine the accident object involved in the traffic accident. The accident discovery module 302 is further configured to send the accident type and the information of the accident object determined to be involved in the traffic accident to the accident analysis module 303.
The accident analysis module 303 is configured to perform traffic accident analysis according to the accident object determined by the accident discovery module 302, the vehicles around the accident object, the traffic marking lines, and other scene conditions, and to obtain an analysis result. The analysis result may include the cause of the traffic accident.
Optionally, the accident analysis module 303 is further configured to clip, according to the analysis result and/or the geographical trajectory information of the accident object, the video content in which the accident and its cause are recorded, and to use the clipped video content as an evidence video.
Optionally, the accident analysis module 303 is further configured to calculate the influence degree of the traffic accident by combining factors such as the time and place of the accident, the traffic flow at the accident location, the accident scale, and the driving trajectories of vehicles around the accident. The influence degree of the traffic accident indicates the degree to which the traffic accident affects the overall traffic environment (such as surrounding vehicles and traffic facilities), the accident object, and the accident party. The influence degree of a traffic accident may be expressed as an influence value, and there may be various ways of calculating the influence value.
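One possible way to express the influence value mentioned above is a weighted sum of normalized factors. The factor names, normalization, and weights below are assumptions for illustration only:

```python
def influence_value(traffic_flow, accident_scale, peak_hour, key_location,
                    weights=(0.4, 0.4, 0.1, 0.1)):
    """Compute an illustrative influence value in [0, 1].

    traffic_flow and accident_scale are assumed normalized to [0, 1];
    peak_hour and key_location are booleans describing the accident's
    time and place. The weights are hypothetical and sum to 1.
    """
    w_flow, w_scale, w_time, w_place = weights
    return (w_flow * traffic_flow + w_scale * accident_scale
            + w_time * float(peak_hour) + w_place * float(key_location))
```

A higher value could, for example, be used to prioritize which accident a traffic manager handles first.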
In some embodiments, the analysis apparatus 300 may further include a result transmission module 304.
The result sending module 304 is configured to send the information of the object involved in the traffic accident, obtained by the accident discovery module 302, and/or the analysis result of the traffic accident, obtained by the accident analysis module 303, to the management platform.
Optionally, the result sending module 304 is further configured to send the information of the vehicle involved in the traffic accident, obtained by the accident discovery module 302, and the analysis result of the traffic accident, obtained by the accident analysis module 303, to the terminal associated with the accident object.
Optionally, the result sending module 304 is further configured to obtain an accident feedback result made by a traffic manager for the traffic accident according to the information sent by the result sending module 304. The result sending module 304 is further configured to send the accident feedback result made by the traffic manager to the terminal associated with the accident object, for example, the vehicle or terminal of a party to the accident.
In still other embodiments, the analysis apparatus 300 may further include a result display module 305. For example, the analysis device may be an electronic device with a display screen, and the analysis device may present a Graphical User Interface (GUI) to a user through the result display module 305 and the display screen, presenting information such as the occurrence of a traffic accident and the analysis result of the traffic accident through the GUI. It should be understood that the users in the present application may include: a traffic manager, an accident party, a user of the analysis device, and the like.
An accident analysis method provided by the embodiment of the present application is described below with reference to fig. 5.
S501, geographical track information of at least one object on the traffic road is obtained.
The geographical trajectory information of each of the at least one object represents the geographical position of the object on the traffic road at each moment within a time period, where the geographical position may be represented by geographical coordinates.
Since objects traveling on the traffic road during a certain period of time (e.g., motor vehicles, non-motor vehicles, pedestrians) can be captured by the plurality of cameras disposed on the traffic road, the geographical trajectory information of at least one object on the traffic road can be determined from the plurality of received videos. Optionally, radar information sensed by radar devices disposed on the traffic road (such as laser radar, infrared radar, millimeter wave radar, etc.) can also be received, and the geographical track information of the objects on the traffic road can be determined by combining the videos and the radar information. The cameras and radar devices may be collectively referred to as raw data collection devices.
In some embodiments, the obtaining of the geographical trajectory information of the at least one object on the traffic road in this step may specifically include:
step1, receiving videos taken by one or more cameras disposed on the traffic road, each video recording traffic status for a different area (or different perspective) of the traffic road.
Step 2: perform object detection on the videos. Specifically, object detection is performed on the video frames of each video to obtain the video position and type information of each object (where the video position of an object is the pixel coordinates of the object in a video frame). Optionally, object attribute detection may further be performed on each detected object to obtain attribute information of the object. Since object detection obtains the type information of the objects in a video frame, the attributes detected by object attribute detection may differ according to the type of the object; for example, when the detected object is a motor vehicle, attributes of the motor vehicle are detected, and when the detected object is a pedestrian, the detected attributes of the pedestrian include: gender, clothing color, body shape, and the like.
It should be understood that the above method for detecting objects in a video frame may use a trained neural network model with an object detection function (e.g., the Faster Region-based Convolutional Neural Network (Faster R-CNN) or the Single Shot MultiBox Detector (SSD)), referred to as an object detection model. The video frame is input into the object detection model, which performs feature extraction on the video frame, performs regression on the extracted features to obtain the video position of each object, performs classification on the extracted features to obtain the type of each object, and outputs the type information and video position of one or more objects detected in the video frame.
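The video position output by such a detection model is typically a bounding box in pixel coordinates; a small sketch of reducing a detection to the single pixel coordinate used in a track follows. The box format and dictionary keys are assumptions for illustration:

```python
def box_center(box):
    """Reduce a bounding box (x_min, y_min, x_max, y_max), in pixel
    coordinates, to a single (x, y) pixel coordinate for tracking."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def detections_to_positions(detections):
    """detections: list of dicts with 'type' and 'box' keys, as an object
    detection model such as Faster R-CNN or SSD might yield after
    post-processing. Returns (type, video_position) pairs."""
    return [(d["type"], box_center(d["box"])) for d in detections]
```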
It should be understood that the method for object attribute detection on a video frame may use a trained neural network model with an object attribute detection function, called an object attribute detection model. A sub-image of the video frame corresponding to a detected object (the sub-image contains the object whose attributes are to be detected and may be cropped out of the video frame) is input into the object attribute detection model, which performs attribute detection and outputs the attribute information of the object.
It should be understood that performing the object detection (and attribute detection) of this step on the multiple video frames of the one or more videos received in Step1 yields the video position and type information of at least one object recorded in the video frames of each video. Optionally, attribute information of at least one object recorded in those video frames may also be obtained.
Step3, object tracking is performed on the same object within the same video to obtain the video track information of the object.
The preceding steps obtain the video positions of objects in multiple video frames of each video, and different frames of the same video record the video positions of the same object at different times, so the object needs to be tracked. Specifically: the same object recorded in the video frames at two adjacent moments in a video is determined according to the video positions of the objects, the two detections are assigned the same object ID, and the pixel coordinates of that object ID in the video frames at the different moments are recorded in a tracking list. Optionally, during object tracking, the type and attributes of an object in the current video frame may be compared with the types and attributes of the objects in the video frame at the previous moment to determine the association between objects in the two adjacent frames, and the two objects with the strongest association are marked with the same object ID. The pixel coordinates of the same object ID recorded in the tracking list at successive times form a pixel coordinate sequence, which is the video track information of the object in the video. The present application does not limit the specific manner of object tracking.
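As a minimal illustration of the association step above (a greedy nearest-neighbor sketch, not the patent's actual tracking algorithm; the distance threshold is an assumed value), detections in adjacent frames can be matched by pixel distance and assigned object IDs recorded in a tracking list:

```python
import math

def track(frames, max_dist=50.0):
    """Greedy nearest-neighbor tracking over a list of frames.
    Each frame is a list of (x, y) pixel coordinates of detected objects.
    Returns {object_id: [(frame_index, (x, y)), ...]} -- the tracking list."""
    tracks = {}     # object_id -> list of (frame_index, pixel position)
    last_pos = {}   # object_id -> last known pixel position
    next_id = 0
    for t, detections in enumerate(frames):
        unmatched = dict(last_pos)
        for pos in detections:
            # pick the closest previously seen object, if close enough
            best_id, best_d = None, max_dist
            for oid, prev in unmatched.items():
                d = math.dist(pos, prev)
                if d < best_d:
                    best_id, best_d = oid, d
            if best_id is None:            # no match: start a new track
                best_id = next_id
                next_id += 1
                tracks[best_id] = []
            else:
                del unmatched[best_id]     # each object matched at most once
            tracks[best_id].append((t, pos))
            last_pos[best_id] = pos
    return tracks
```

A production tracker would additionally use the type and attribute comparison described above to strengthen the association between adjacent frames.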
It should be appreciated that this step may obtain video track information for at least one object in each video.
Step4, the geographical track information of each object is determined according to its video track information.
Since the video track information of each object includes a plurality of pixel coordinates, the object can be spatially positioned from each pixel coordinate in the track. Spatially positioning an object means determining the geographic coordinates of the object on the traffic road from its pixel coordinates and the camera's spatial calibration relationship. The spatial calibration relationship of each camera can be expressed as a mapping between the pixel coordinates of a point in a video frame captured by that camera and the geographic coordinates of the same point on the traffic road.
It should be understood that, since the installation position, shooting angle, height, and the like differ for each camera, the spatial calibration relationship between the video captured by each camera and the traffic road being shot also differs. The spatial calibration relationship for each camera can be calculated in advance by the analysis device or by other equipment, and the present application does not limit the specific calculation method. For example: the geographic coordinates of a plurality of control points on the traffic road can be measured in advance, the pixel coordinates of those control points in a video frame shot by the camera are determined, a homography transformation matrix is computed from the control points' geographic coordinates and corresponding pixel coordinates, and the computed homography matrix is taken as the camera's spatial calibration relationship.
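Once the homography matrix has been estimated (in practice from at least four control-point pairs, e.g. with OpenCV's `cv2.findHomography`), applying it to a pixel coordinate is a small computation. The sketch below shows only the application step, with an assumed example matrix:

```python
def apply_homography(H, pixel):
    """Map a pixel coordinate to a geographic coordinate using a 3x3
    homography matrix H (the camera's spatial calibration relationship)."""
    u, v = pixel
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)   # normalize homogeneous coordinates
```

Mapping every pixel coordinate in an object's video track through this function yields its (partial) geographical track.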
By this method, the video track information of each of the plurality of objects can be converted into partial geographical track information of that object.
Since a moving object can be captured by a plurality of cameras, each camera may record the object's motion only during part of the time. The partial geographical track information of the same object from different videos can be spliced to obtain more complete geographical track information over a longer time period. A specific splicing method is as follows: analyze the partial geographical track information of objects in different videos; if one or more groups of data at the same or close times have the same or similar geographic coordinates, and, further, the types and attributes of the objects corresponding to the two pieces of track information are also the same or similar, the two pieces of partial geographical track information are determined to belong to the same object. The two parts can then be spliced into combined geographical track information; for overlapping data, the spliced value can be obtained by averaging. Similarly, multiple pieces of partial geographical track information of the same object can be spliced, finally obtaining the object's geographical track information over a time period.
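The splice-and-average step above can be sketched as follows (a minimal illustration assuming tracks are keyed by timestamp; the identity check between tracks is assumed to have already been made):

```python
def stitch(track_a, track_b):
    """Merge two partial geographic tracks of the same object.
    Each track is {timestamp: (lon, lat)}; coordinates at overlapping
    timestamps are averaged, as described above."""
    merged = dict(track_a)
    for t, (x, y) in track_b.items():
        if t in merged:
            ax, ay = merged[t]
            merged[t] = ((ax + x) / 2, (ay + y) / 2)  # average the overlap
        else:
            merged[t] = (x, y)
    return dict(sorted(merged.items()))  # chronological order
```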
It should be understood that the geographical track information of a plurality of objects can be obtained according to the above method, and the geographical track information of each object can be obtained by coordinate transformation and splicing of a plurality of video track information, or can be obtained by coordinate transformation of only video track information formed by objects in a video captured by one camera.
Optionally, in S501 of the present application, information such as the velocity, acceleration, and posture of each object at each moment may be determined from its geographical trajectory information, where the posture of an object indicates its motion direction or orientation. The velocity of an object at a moment may be determined from the distance difference and time difference between that moment and the previous moment. The acceleration at a moment may be determined from the difference between the speed at that moment and the speed at the previous moment, divided by the time difference. The posture at a moment may be determined by the tangential direction, at the corresponding point, of the geographic trajectory line formed by fitting the object's geographical trajectory information. The calculated velocity, acceleration, posture, and other information of each object at each moment may be recorded in the same table (or matrix) as the object's geographical track information.
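The finite-difference computations described above can be sketched as follows (a simplified illustration: heading is taken from the chord between consecutive samples rather than the tangent of a fitted curve):

```python
import math

def kinematics(track):
    """Derive per-moment speed, acceleration and heading (posture) from
    geographic track info: a list of (t, x, y) samples, t ascending."""
    out = []
    prev_speed = None
    for i in range(1, len(track)):
        t0, x0, y0 = track[i - 1]
        t1, x1, y1 = track[i]
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt            # distance / time
        accel = None if prev_speed is None else (speed - prev_speed) / dt
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))  # motion direction
        out.append({"t": t1, "speed": speed, "accel": accel, "heading": heading})
        prev_speed = speed
    return out
```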
Optionally, in another embodiment, before performing S501, the analysis device may further receive report information sent by the terminal, where the report information may include time information of a traffic accident or geographic location information of the traffic accident, and the analysis device calls the corresponding one or more videos according to the report information, and further performs the operation of obtaining geographic track information of at least one object in S501 according to the called one or more videos.
S502, determining a traffic accident and an accident object according to the geographical track information of at least one object, wherein the accident object is an object of the at least one object, which has a traffic accident.
The determining of the accident object through the geographical track information may be divided into two steps:
First, abnormal objects whose geographical track information is abnormal are determined by analyzing the geographical track information of each object and/or comparing it with the historically collected tracks of other objects on the same road.
An abnormal object can be determined from a number of aspects, such as: 1. Analyze the geographical track information of each object: fit the track information to obtain the object's geographic trajectory line, and determine abnormal objects by combining the road environment information of the road section where the object is located (such as traffic marking lines and traffic indication information) with the trajectory line. For example, an object whose geographic trajectory crosses multiple lanes, touches a lane boundary line, or is otherwise disordered is determined to be an abnormal object; such an object has typically been involved in a collision or a serious violation. The road environment information of the road section may be obtained by detection and analysis by the analysis device, or obtained by the analysis device from other devices or equipment. 2. Obtain the speed and acceleration of the object at multiple moments from its geographical track information, and determine an object with abnormal speed or acceleration to be an abnormal object. For example, an object whose abrupt speed change between successive moments exceeds a certain threshold is determined to be abnormal, since the change may be due to a crash. 3. Compare the geographic trajectory fitted from the object's track information with the geographic trajectories of regular objects on the road at historical moments, and determine an object whose trajectory differs greatly to be an abnormal object. The geographical trajectory information of regular objects at historical moments may be stored in a database.
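Aspect 2 above (abrupt speed change between successive moments) can be sketched as a simple threshold check; the jump threshold here is an assumed illustrative value, not one specified in this application:

```python
def find_speed_anomalies(speeds, jump_threshold=8.0):
    """Flag objects whose speed changes abruptly between consecutive
    moments -- a possible sign of a crash.
    speeds: {object_id: [v0, v1, ...]} in m/s, sampled at equal intervals."""
    anomalous = set()
    for oid, vs in speeds.items():
        for v0, v1 in zip(vs, vs[1:]):
            if abs(v1 - v0) > jump_threshold:
                anomalous.add(oid)
                break
    return anomalous
```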
Second, accident objects among the abnormal objects are determined according to the abnormal objects' geographical track information and the geographic tracks and other information of objects that are spatiotemporally related to them. The information of those related objects can be found by searching the database, according to the geographic coordinates at each moment in the abnormal object's track information, for other vehicles passing the road corresponding to those coordinates at those moments. It should be understood that a traffic accident may involve one accident object (e.g., a vehicle rollover) or multiple accident objects (e.g., a collision between multiple vehicles).
There are various specific methods for determining the accident object, for example:
1. Determine the accident object by mechanism modeling. In mechanism modeling, a judgment model is established mainly from specific features such as the speed, geographic position, and posture at each moment in the abnormal object's geographical track information, together with the behavior of vehicles around the abnormal object. For example, the following mechanism model may be constructed:
VI = 1 if |Δv| > a, otherwise VI = 0

PI = 1 if |Δp| > b, otherwise PI = 0

OI = 1 if |Δθ| > c, otherwise OI = 0

RI = 1 if detouring behavior is present, otherwise RI = 0

ACCIDENT = 1 if VI + PI + OI + RI ≥ threshold, otherwise ACCIDENT = 0

where Δv, Δp, and Δθ are the instantaneous speed, position, and attitude-angle changes of the abnormal vehicle.
In the above formulas, VI represents the instantaneous speed change indicator, PI the instantaneous position change indicator, OI the instantaneous attitude-angle change indicator, RI whether detouring occurs, and ACCIDENT whether the vehicle is an accident object. A binary value is determined by comparing the abnormal vehicle's instantaneous speed change, instantaneous position change, and instantaneous attitude-angle change against the preset thresholds a, b, and c respectively, and whether the abnormal vehicle is an accident object is then determined from the relation between the sum of the binary values and the preset threshold value `threshold`. When ACCIDENT is 1, the abnormal object is an accident object; when ACCIDENT is 0, it is a non-accident object. It should be understood that the above is only a simple example; in other embodiments, more parameters may be used to construct the mechanism model for judging accident objects, and different weights may be set for different parameters.
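The mechanism model above translates directly into code. In this sketch the threshold values a, b, c and the sum threshold are illustrative assumptions, since the application leaves them as preset parameters:

```python
def mechanism_accident(dv, dp, dtheta, detour,
                       a=5.0, b=3.0, c=30.0, threshold=2):
    """Mechanism model: binarize the instantaneous speed (VI), position
    (PI) and attitude-angle (OI) changes against thresholds a, b, c,
    add the detour indicator (RI), and compare the sum to `threshold`.
    Returns ACCIDENT (1 = accident object, 0 = non-accident object)."""
    VI = 1 if abs(dv) > a else 0
    PI = 1 if abs(dp) > b else 0
    OI = 1 if abs(dtheta) > c else 0
    RI = 1 if detour else 0
    return 1 if VI + PI + OI + RI >= threshold else 0
```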
2. Determine the accident object with an Artificial Intelligence (AI) model. A pre-trained AI model may also be used to determine the accident object from the abnormal objects, for example: a logistic regression model, a Support Vector Machine (SVM), a random forest model, a deep learning model, and the like. The AI model is trained in advance using features extracted from known accident objects in historical traffic accidents, such as historical track coordinates, speed changes, acceleration changes, and attitude-angle changes. The trained AI model can then judge whether an abnormal object is an accident object. The judgment process is: input features of the abnormal object, such as its track coordinates, speed changes, acceleration changes, and attitude-angle changes, into the trained AI model, and obtain through inference a result indicating that the abnormal object is an accident object or a non-accident object.
It should be understood that the above are merely examples of two methods of determining an incident object and that the present application is not limited to the specific manner in which an incident object is determined.
Through the above process of determining the accident object, the traffic accident itself can also be determined, for example its type. For example, when two accident objects are determined and their trajectories include a coincident point, or the distance between the trajectory end points of the accident objects is less than a distance threshold, the traffic accident may be determined to be a collision between the objects.
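The collision criterion in the example above can be sketched as a small check over two geographic tracks (the distance threshold is an assumed illustrative value):

```python
import math

def is_collision(track_a, track_b, dist_threshold=2.0):
    """Decide whether two accident objects collided: their trajectories
    share a coincident point, or their trajectory end points are closer
    than a distance threshold. Tracks are lists of (x, y) coordinates."""
    if set(track_a) & set(track_b):            # coincident trajectory point
        return True
    end_dist = math.dist(track_a[-1], track_b[-1])
    return end_dist < dist_threshold
```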
S503, acquiring environmental information of an accident object, and analyzing the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object to obtain an analysis result.
After the traffic accident and the accident object are determined in S502, the environmental information associated with the accident object may be retrieved according to the accident object's geographical track information. The environmental information includes information of surrounding objects that are spatiotemporally correlated with the accident object's geographical track information, and/or surrounding road environment information correlated with it. For example: the information of surrounding objects includes the type, track information, speed, acceleration, etc. of other objects around the accident object's geographic trajectory; the surrounding road environment information includes traffic sign lines, traffic lights, traffic command boards, and the like around the accident object's geographic trajectory.
The cause of the traffic accident can be further judged from the retrieved environmental information of the accident object. The cause of a traffic accident is the root cause of its occurrence. For example, a rear-end collision between two vehicles may be caused by excessive acceleration of the rear vehicle within a certain period, or by an emergency stop or backward movement of the front vehicle; a multi-vehicle collision may be caused by one of the accident objects changing into an adjacent lane. The causes of traffic accidents are various, must be judged by combining multiple kinds of information, and a traffic accident may have one or more causes. The method for judging the cause is likewise not limited to one. The judgment can be made according to the rules in a preset rule base; alternatively, an AI model may be used: accident characteristics of historical traffic accidents are classified and used as features to train an initial AI model, and the trained AI model (also called an accident cause analysis model) is used for cause analysis of traffic accidents. Specifically: input the characteristic information of a newly occurred traffic accident into the accident cause analysis model, which performs feature extraction, analysis, and inference to obtain the cause of the accident.
The preset rule base can contain a plurality of rules for analyzing the causes of traffic accidents, derived from historical traffic accident analysis. When a traffic accident belongs to a certain accident type, the rules in the rule base for that type can be applied, in combination with the geographical track information of the accident objects and the environmental information related to them, to determine the cause of the traffic accident. For example, a turning collision between vehicles may be governed by several rules of different priorities. Rule 1 judges whether there is an illegal turn in the geographic track of the accident object; if so, the turning collision was caused by the accident object's illegal turn. If the judgment of rule 1 is no, rule 2 is applied; if it holds, the cause of the turning collision is overspeed traveling of the accident object. If the judgment of rule 2 is no, rule 3 is applied, which judges whether the accident object was stationary or moving slowly at the intersection; if so, that is the cause of the turning collision. There can be further rules, which are not enumerated here. The rules for each type of traffic accident may be dynamically accumulated and updated, which helps ensure that the causes obtained from the rules are more accurate.
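The priority-ordered rule chain for the turning-collision example can be sketched as follows; the feature field names are illustrative assumptions, not the application's schema:

```python
def turning_collision_cause(accident):
    """Apply the turning-collision rules in priority order
    (rule 1 > rule 2 > rule 3). `accident` is a dict of boolean
    features derived from track and environment analysis."""
    if accident.get("illegal_turn"):
        return "illegal turning of the accident object"       # rule 1
    if accident.get("overspeed"):
        return "overspeed traveling of the accident object"   # rule 2
    if accident.get("stationary_or_slow_at_intersection"):
        return "accident object stationary or moving slowly"  # rule 3
    return "undetermined"   # further accumulated rules would be consulted
```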
Optionally, after the traffic accident is analyzed by the above method, an evidence video can be produced to show the cause and the course of the traffic accident. The evidence video can be obtained by clipping and splicing videos shot by a plurality of cameras on the traffic road; that is, it comes from one or more cameras shooting the traffic road. Its content may cover the period from the point in time when the cause of the traffic accident occurs to the point in time when the traffic accident occurs. Editing the evidence video also includes, without limitation, splicing videos of the related historical tracks on a time scale, showing the causality of the accident, slow playback of important segments, vehicle marking, and so on. The analyzed and clipped evidence video can form a complete chain of evidence and provide a basis for traffic managers to determine responsibility for the traffic accident.
Optionally, after obtaining the cause of the traffic accident, the analysis device may further propose a suggestion or plan for determining responsibility, in combination with the traffic laws and regulations of the relevant jurisdiction, such as the Road Traffic Safety Law of the People's Republic of China. For example: for a rear-end collision between two vehicles caused by the rear vehicle accelerating into the front vehicle, the rear vehicle bears full responsibility according to traffic law. For a collision caused by illegal reversing, the Road Traffic Safety Law of the People's Republic of China provides that in a rear-end traffic accident formed by the front vehicle reversing or rolling backward into the rear vehicle, the front vehicle bears full responsibility. The analysis device can therefore combine this provision to produce a liability suggestion or plan in which the accident object that reversed illegally bears full responsibility.
Optionally, analyzing the traffic accident further includes analyzing its degree of influence, which includes the degree of traffic-jam impact, the collision intensity of the accident object, or the degree of damage of the accident object. The analysis may combine one or more factors, such as: the position information of the accident vehicle (e.g., the lane where the accident occurred and the number of lanes occupied), the accident time (used to judge whether it is peak traffic time), whether the accident involves pedestrians (obtainable from the type of the accident object or by identifying pedestrians in the evidence video), the tracks of vehicles around the accident, and the collision intensity of the accident vehicle (which can be evaluated from its running speed). For example, the degree of influence may be represented by an influence value calculated as a weighted sum of different factors, with the following formula:
Score = w1*V_time + w2*V_ped + w3*V_dam + w4*V_lane + w5*V_cars

where w1, w2, w3, w4, and w5 are weight values; V_time is the score corresponding to the time the accident occurred; V_ped is the score relating to the numbers of pedestrians and non-motor vehicles involved; V_dam is the score for the collision intensity of the accident evaluated from the trajectory (mainly via instantaneous velocity); V_lane is the score for the number and position of lanes occupied by the accident vehicle; and V_cars is the score corresponding to vehicles detouring around the accident. The scores can be obtained by comparing the corresponding conditions against a score list preset by the traffic police, or evaluated by a mathematical model; the present application does not limit the specific scoring method.
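A minimal sketch of the weighted-sum influence value; the weight values here are illustrative assumptions, since the application leaves them to be preset:

```python
def impact_score(v_time, v_ped, v_dam, v_lane, v_cars,
                 weights=(0.2, 0.25, 0.25, 0.15, 0.15)):
    """Influence value of a traffic accident:
    Score = w1*V_time + w2*V_ped + w3*V_dam + w4*V_lane + w5*V_cars."""
    w1, w2, w3, w4, w5 = weights
    return w1 * v_time + w2 * v_ped + w3 * v_dam + w4 * v_lane + w5 * v_cars
```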
In some embodiments, the calculated influence value may change dynamically with the unprocessed time after the accident occurs: the longer the accident remains unprocessed, the larger the influence value.
A comprehensive influence value can be obtained through the above formula. In some embodiments, a higher influence value indicates a greater degree of impact of the traffic accident on the accident object and/or on the surroundings of the accident location. For example, the influence value can be divided into several intervals by magnitude. When the value falls in the high interval, the traffic-police handling priority is highest: officers should be dispatched to the scene immediately, otherwise serious environmental impact or casualties may result. When the value falls in the middle interval, the handling priority is high: the traffic police should promptly contact the persons corresponding to the accident object, learn more about the accident, and determine responsibility online or offline. When the value falls in the low interval, the handling priority is low: the traffic police can contact the persons corresponding to the accident object when time permits, for accident liability assessment or safety education.
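The interval-based prioritization above can be sketched as follows; the interval boundaries are assumed illustrative values:

```python
def handling_priority(score, bounds=(4.0, 7.0)):
    """Map an influence value to a traffic-police handling priority
    by interval (low / middle / high)."""
    low, high = bounds
    if score >= high:
        return "high"    # dispatch officers to the scene immediately
    if score >= low:
        return "medium"  # contact the parties promptly, determine liability
    return "low"         # contact the parties when time permits
```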
Optionally, the analysis device may further report the obtained influence degree of the traffic accident to a device of a third party, where the third party may be a management platform, a device associated with an accident object, a traffic news platform, a traffic radio station, and the like.
Optionally, in some embodiments, after the analyzing apparatus performs the method, the analyzing apparatus may further perform the following steps:
S504, send the analysis result to the management platform.
Through the analysis of the traffic accident in steps S501-S503, the obtained analysis result may include one or more of the following: the cause of the traffic accident, the evidence video, accident liability information (e.g., a liability suggestion or plan), and the influence value of the traffic accident. The analysis device may send the analysis result to the management platform via the network. Optionally, the analysis device may also send intermediate information from the analysis process along with the analysis result, for the management platform's reference and decision-making. For example, the intermediate information may include the type of the traffic accident; the speed, acceleration, and posture of the accident object; and the geographical track information of the accident object and surrounding objects.
The analysis device may further receive an accident feedback result sent by the management platform, where the accident feedback result may be: the result of the process of determining responsibility for the accident, or the processing scheme of the accident, etc. The analysis device may send the accident feedback result to a terminal corresponding to the accident object, for example: a mobile phone of a vehicle owner of an accident object, a vehicle-mounted computer of an accident vehicle, and the like.
Optionally, the analysis device may also send the analysis result to the terminal associated with the accident object, and may also receive feedback information returned by the terminal associated with the accident object, where the feedback information may indicate whether the party involved in the accident object disagrees with the accident responsibility determination information in the analysis result.
The method described in S501-S504 determines the accident object by analyzing the geographical track information and environmental information of objects on the traffic road, and further analyzes the accident object to obtain an analysis result. It can thus accurately identify traffic accidents and automatically obtain richer, deeper information about them, helping traffic police handle traffic accidents efficiently and in an orderly manner, greatly improving handling efficiency and speeding up the operation of the traffic system.
In some embodiments of the present application, the analysis device may also present the analysis result of the traffic accident to the user through the GUI. In other embodiments, after the analysis device sends the traffic accident information and the analysis result to the management platform, the management platform may present the traffic accident information and the analysis result to the user through the GUI. The management platform can be a display device provided with a traffic management client or a cloud platform. In other embodiments, the analysis apparatus may further send the information of the traffic accident and the analysis result to a terminal associated with the accident object (for example, a mobile phone, a computer, etc. of the accident party, or a terminal device installed on the accident object, such as a vehicle-mounted computer), so that the accident party can visually obtain the information of the traffic accident and the analysis result through the GUI.
Fig. 6 is an exemplary GUI display interface of the analysis result, and the GUI interface shown in fig. 6 may be presented by the analysis apparatus or the management platform. As shown in fig. 6, the content displayed on the GUI interface includes four main parts, namely an evidence video, an accident detail image, an accident auxiliary responsibility determination and a traffic influence report.
The evidence video can be played at slow, fast, or normal speed on the GUI according to the user's operation. The GUI can also display accident detail images, which may be formed by the background automatically enlarging and/or cropping video frames in the evidence video corresponding to accident details that may concern the user. An accident detail image may also be obtained by the user autonomously selecting an image in the evidence video through operations such as clicking and sliding while watching, and enlarging it.
Optionally, the accident detail image may also include images of multiple angles in the same place and scene. For example: the analysis device can search different videos shot by different cameras according to a certain key video frame in the evidence video (the key video frame can be automatically identified by the analysis device or can be designated by a user), so as to obtain video frames which are shot by different cameras at the same time and have the same scene but different angles with the key video frame, and further obtain accident detail images of multiple angles in the scene. When the accident detail images are displayed, the accident detail images in multiple angles can be spliced, so that the accident detail images in multiple angles are simultaneously displayed on one interface. Accident detail images at different angles can also be displayed respectively.
Optionally, after obtaining the accident detail images of multiple angles, mapping and fusing the accident detail images, for example: the position of the object in the accident detail image of each angle can be mapped into a three-dimensional map, so that the accident detail images of multiple angles are fused to form a three-dimensional accident detail image which can be adjusted by different angles. By adjusting the angle of the three-dimensional accident detail image, the accident detail image of a non-camera shooting angle can be obtained, for example: a top view of the incident details of the scene, a view of the incident details of the angle at which the camera is not mounted, may be obtained.
The accident auxiliary responsibility determination part comprises information such as the time, place, accident type, accident cause, and liability suggestion, together with the laws and regulations on which the liability determination is based, so that the user can comprehensively understand the traffic accident. The traffic impact report part includes the information used to evaluate the influence value and influence degree, as well as the evaluated influence degree and influence value of the traffic accident, so the user can intuitively and conveniently obtain the impact information of the traffic accident.
It should be understood that the content displayed in the GUI interface of fig. 6 is only an exemplary description of the present application. The traffic accident analysis result can be presented in various ways, and the present application is not specifically limited in this respect. For example, the information of the four parts in the GUI interface in fig. 6 may be presented through a plurality of interfaces, and the presentation of each interface may be controlled and selected by the user. For another example, only part of the information shown in fig. 6 may be presented, such as only the evidence video, time, location, cause information, and impact value.
In one implementation, the GUI interface shown in fig. 6 above is presented to a traffic manager by the analysis device or the traffic management platform. When presenting to a traffic manager, the system may first present part or all of the content of the traffic impact report through an interface, and then present part or all of the evidence videos, accident detail images, or accident auxiliary responsibility determination content according to the selection of the traffic manager.
In another implementation, the GUI interface shown in fig. 6 above is presented by the analysis device to the parties to the accident. When presenting to an accident party, part or all of the content of the accident auxiliary responsibility determination may be presented first, and the evidence videos or accident detail images may then be presented according to the selection of the accident party.
Depending on the application scenario, the analysis device of the present application may have different embodiments when executing the traffic accident analysis method. The method shown in the foregoing fig. 5 is a method in which the analysis device discovers and analyzes traffic accidents on global traffic roads in a fully automatic and real-time manner. In other scenarios, the analysis device may analyze a traffic accident according to reported information of the traffic accident. Another embodiment is described in detail below in conjunction with fig. 7.
S701, the analysis device receives reported information of the traffic accident, wherein the reported information comprises geographical position information.
When a traffic accident occurs on a traffic road, a party involved in the traffic accident (or a bystander) can report the traffic accident through a software application or a web client installed on a terminal. The reported information may include: the geographical position of the traffic accident, the time of the traffic accident, and information on the objects involved in the accident (such as the number of objects, the type of the objects, and license plate information).
S702, the analysis device automatically searches related videos according to the received reported information.
Specifically, after receiving the reported information, the analysis device may retrieve, according to the geographical position information in the reported information, video shot by cameras in the area corresponding to that geographical position; the retrieved video may cover a predetermined time period before the reported information was received. Optionally, the analysis device may also retrieve information recorded by sensors such as radar after receiving the reported information. The retrieved related video may also be referred to as the first video.
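Selecting which cameras' footage to retrieve can be sketched as a radius query around the reported location. This is a minimal sketch assuming cameras are registered with WGS-84 coordinates; the class, function names, and 200 m default radius are illustrative assumptions:

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Camera:
    cam_id: str
    lat: float
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def cameras_near(cameras, lat, lon, radius_m=200.0):
    """Select cameras within radius_m of the reported accident location;
    their footage for the window before the report is then retrieved."""
    return [c.cam_id for c in cameras if haversine_m(c.lat, c.lon, lat, lon) <= radius_m]
```

The same radius query could equally be answered by a spatial index in a production deployment; a linear scan suffices to show the idea.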
S703, the analysis device analyzes the related video to determine the analysis result of the traffic accident.
The method for analyzing the relevant video by the analysis device and the obtained relevant analysis result may be the same as the foregoing steps S502 and S503, and are not described herein again.
S704, the analysis device generates an accident liability determination report according to the obtained analysis result, and sends the accident liability determination report to the terminal that reported the information.
The accident liability determination report includes accident responsibility judgment information (for example, for a traffic accident in which a rear vehicle suddenly accelerates and rear-ends the preceding vehicle, the rear vehicle is judged to be responsible and the preceding vehicle not responsible). Optionally, the accident liability determination report may further include: accident evacuation advice, accident type and description, accident cause description, the regulations according to which the accident is settled, evidence videos, and accident detail images.
Optionally, the analysis device may further determine the parties involved in the traffic accident according to the information on the accident objects included in the reported information, and send the accident liability determination report to the terminals of all parties involved. For example, the terminal of the owner corresponding to a license plate can be found according to the license plate information of an accident object included in the reported information, and the accident liability determination report is sent to that terminal. Alternatively, the analysis device may determine the accident objects of the traffic accident from the retrieved video, further determine the drivers of the accident objects, look up the terminals associated with the drivers, and send the accident liability determination report to those terminals.
S705, the analysis device receives feedback information on the accident liability determination report from the terminal; if the feedback information indicates that the terminal raises no objection to the accident responsibility judgment information in the report, the analysis device does not trigger an alarm.
Specifically, the analysis device may determine not to trigger the alarm after receiving no-objection feedback information sent through their terminals by all accident parties of the traffic accident.
Optionally, in this step, the analysis device may further monitor whether the accident parties evacuate the scene according to the accident evacuation recommendation within a predetermined time. If the accident parties have not evacuated according to the recommendation after the time expires, the analysis device may send a notification to a traffic police management platform near the accident site, so that a traffic manager can clear the scene or enforce the law on site.
Optionally, after the analysis device receives feedback information indicating that the terminal disagrees with the accident responsibility judgment information in the accident liability determination report, the analysis device may further automatically send the related information of the traffic accident, part or all of the contents of the accident liability determination report, or part or all of the information in the analysis result to an insurance platform according to an instruction of the party, so that the insurance platform settles claims for the traffic accident.
S706, the analysis device receives feedback information on the accident liability determination report from the terminal; if the feedback information indicates that the terminal disagrees with the accident responsibility judgment information in the report, the analysis device sends the obtained analysis result to the traffic management platform. The presentation of the analysis result on the interface of the traffic management platform may be as described above for fig. 6. The analysis result may include a traffic impact report, and a traffic manager can decide, according to the impact degree of the traffic accident in the traffic impact report, whether to dispatch police for on-site handling and/or how quickly to do so.
Optionally, when the analysis device needs to analyze and process multiple traffic accidents, a reporting priority may be generated according to the accident impact degree in the analysis results (the higher the severity, the higher the reporting priority), and the analysis results are reported to the traffic management platform in priority order.
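The severity-ordered reporting above amounts to a priority queue keyed on the impact degree. A minimal sketch with Python's `heapq` (min-heap, so impact is negated); the dictionary keys are illustrative assumptions:

```python
import heapq

def report_order(accidents):
    """Order pending accidents so the most severe (highest impact degree)
    is reported to the traffic management platform first.

    accidents: list of dicts with illustrative keys "id" and "impact".
    """
    # Negate impact for a max-behaving min-heap; the enumeration index
    # breaks ties in arrival order.
    heap = [(-a["impact"], i, a["id"]) for i, a in enumerate(accidents)]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, acc_id = heapq.heappop(heap)
        order.append(acc_id)
    return order
```

A heap rather than a full sort fits the streaming setting, where new accidents keep arriving while earlier ones are still being reported.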
It should be understood that the above steps S701-S706 are an embodiment of the analysis apparatus of the present application in another application scenario; for the parts not described in steps S701-S706, reference may be made to the descriptions of the foregoing steps S501-S504.
As shown in fig. 4, the present application provides an analysis apparatus 300. The analysis apparatus 300 may include a data processing module 301, an accident discovery module 302, an accident analysis module 303, a result sending module 304, and a result display module 305. The analysis apparatus 300 may perform some or all of the methods described in S501-S504 above. For example:
the data processing module 301 is configured to obtain geographical trajectory information of at least one object on a traffic road;
an accident discovery module 302 for determining an accident object according to the geographical trajectory information of at least one object;
an accident analysis module 303, configured to obtain environment information of an accident object; and obtaining an analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
Optionally, the analysis result includes accident responsibility determination information.
Optionally, the result sending module 304 is configured to send the analysis result to a terminal associated with the accident object.
Optionally, the result sending module 304 is further configured to receive feedback information sent by the terminal associated with the accident object, where the feedback information indicates that there is an objection or no objection to the accident responsibility determination information.
Optionally, the analysis result includes one or any combination of the following information: the system comprises time information of accidents, accident types and descriptions, accident causes, evidence videos and accident detail images, wherein the evidence videos are used for showing the causes or processes of the traffic accidents.
Optionally, the result sending module 304 is further configured to send the analysis result to a management platform.
Optionally, the result sending module 304 is further configured to receive an accident feedback result sent by the management platform; and sending the accident feedback result to a terminal associated with the accident object.
Optionally, the analysis result includes: the influence degree of the traffic accident comprises: a degree of traffic jam impact or a collision intensity of the accident object or a degree of damage of the accident object.
Optionally, the result sending module 304 is further configured to notify a third party of the influence degree of the traffic accident.
Optionally, the accident discovery module 302 is configured to determine an abnormal trajectory according to the geographical trajectory information of the at least one object; and determine the accident object by using the abnormal trajectory and a pre-trained artificial intelligence AI model.
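The application relies on a pre-trained AI model for this step; purely to illustrate what "determining an abnormal trajectory" can mean, the sketch below substitutes a simple hard-deceleration heuristic. The function name, input format, and threshold are illustrative assumptions, not the application's model:

```python
def abnormal_trajectory(points, decel_threshold=6.0):
    """Flag a trajectory as abnormal when it shows a sudden speed drop
    (e.g. hard braking or a collision).

    points: list of (t_seconds, speed_mps) samples, sorted by time.
    decel_threshold: deceleration in m/s^2 regarded as abnormal.
    """
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t1 > t0 and (v0 - v1) / (t1 - t0) >= decel_threshold:
            return True
    return False
```

In the described system, candidate trajectories flagged this way (or by similar rules) would then be passed to the AI model to confirm the accident object.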
Optionally, the data processing module 301 is configured to obtain a video acquired by at least one camera disposed on the traffic road; and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
Optionally, the data processing module 301 is further configured to: and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
Optionally, the data processing module 301 is configured to: acquiring a first video in videos collected by at least one camera arranged on the traffic road according to the reported information, wherein the first video corresponds to the traffic accident occurring on the traffic road; and acquiring the geographical track information of at least one object on the traffic road according to the first video.
Optionally, the environmental information of the accident object includes one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
The present application also provides an analysis apparatus 800, where the analysis apparatus 800 may be a hardware device or a software device, and the analysis apparatus 800 may execute some or all of the methods in S701 to S706. For example: the analysis device 800 may include:
an obtaining module 801, configured to obtain a first video, where the first video is collected by at least one camera disposed on a traffic road, and the first video corresponds to a traffic accident occurring on the traffic road.
An analysis module 802, configured to obtain an analysis result of the traffic accident according to the first video.
A providing module 803, configured to provide the analysis result of the traffic accident.
Optionally, the analysis result includes accident responsibility judgment information.
Optionally, the providing module 803 is configured to send the analysis result to a terminal associated with the accident object.
Optionally, the providing module 803 is further configured to receive feedback information sent by a terminal associated with an accident object, where the feedback information indicates that there is an objection or no objection to the accident responsibility determination information.
Optionally, the analysis results may include one or more of the following information:
accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident.
Optionally, the providing module 803 is further configured to send the analysis result to the management platform.
Optionally, the providing module 803 is further configured to receive an accident feedback result sent by the management platform; and sending the accident feedback result to a terminal associated with the accident object.
Optionally, the analysis result includes: the influence degree of the traffic accident comprises: a degree of traffic jam impact or a collision intensity of the accident object or a degree of damage of the accident object.
Optionally, the providing module 803 is further configured to notify a third party of the degree of influence of the traffic accident.
Optionally, the analysis module 802 is configured to determine an abnormal trajectory according to the geographical trajectory information of at least one object; and determining the accident object by using the abnormal track and a pre-trained artificial intelligence AI model.
Optionally, the obtaining module 801 is configured to obtain a video collected by at least one camera disposed on the traffic road; and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
Optionally, the obtaining module 801 is further configured to: and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
Optionally, the obtaining module 801 is configured to: acquiring a first video in videos collected by at least one camera arranged on the traffic road according to the reported information, wherein the first video corresponds to the traffic accident occurring on the traffic road; and acquiring the geographical track information of at least one object on the traffic road according to the first video.
Optionally, the environmental information of the accident object includes one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
The present application further provides a management platform, which includes a display interface, and the display interface is used for presenting the content of the foregoing fig. 6 and the description related to fig. 6.
The present application also provides a traffic accident analysis system, which includes at least one camera disposed on a traffic road and the analysis apparatus shown in fig. 4 or fig. 8;
the analysis apparatus is configured to execute, according to the captured video, the traffic accident analysis method corresponding to the method embodiment of fig. 5 and/or fig. 7.
The present application also provides a computing device as described in fig. 2, or a computing device system as described in fig. 3. The computing device or computing device system may be deployed on the cloud, or on the edge side or terminal side. The computing device or computing device system includes the aforementioned memory and processor, and the processor executes the instructions in the memory to perform the method embodiments corresponding to fig. 5 and/or fig. 7.
The descriptions of the flows corresponding to the above figures each have their own emphasis; for parts not described in detail in a certain flow, reference may be made to the related descriptions of the other flows.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device containing one or more available media, such as a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., SSD).

Claims (44)

1. A traffic accident analysis method, comprising:
acquiring geographical track information of at least one object on a traffic road;
determining an accident object according to the geographical track information of the at least one object;
acquiring environmental information of the accident object;
and obtaining an analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
2. The method of claim 1, wherein the analysis result comprises accident liability determination information.
3. The method of claim 2, further comprising:
and sending the analysis result to a terminal associated with the accident object.
4. The method of claim 3, further comprising:
and receiving feedback information sent by a terminal associated with the accident object, wherein the feedback information indicates that there is an objection or no objection to the accident responsibility judgment information.
5. The method according to any one of claims 1 to 4, wherein the analysis result comprises one or any combination of the following information:
accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident.
6. The method of claim 5, further comprising:
and sending the analysis result to a management platform.
7. The method of claim 6, further comprising:
receiving an accident feedback result sent by the management platform;
and sending the accident feedback result to a terminal associated with the accident object.
8. The method of any one of claims 1-6, wherein the analysis results comprise: the influence degree of the traffic accident comprises: a degree of traffic jam impact or a collision intensity of the accident object or a degree of damage of the accident object.
9. The method of claim 8, further comprising: and informing a third party of the influence degree of the traffic accident.
10. The method according to any one of claims 1-9, wherein determining an accident object based on the geographical trajectory information of the at least one object comprises:
determining an abnormal track according to the geographical track information of the at least one object;
and determining the accident object by using the abnormal track and a pre-trained artificial intelligence AI model.
11. The method according to any one of claims 1-10, wherein the obtaining geographic trajectory information of at least one object on a traffic road comprises:
acquiring videos collected by at least one camera arranged on the traffic road;
and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
12. The method according to any one of claims 1-10, wherein prior to obtaining geographical trajectory information for at least one object on a traffic road, the method further comprises:
and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
13. The method of claim 12, wherein the obtaining geographic trajectory information of at least one object on a traffic road comprises:
acquiring a first video in videos collected by at least one camera arranged on the traffic road according to the reported information, wherein the first video corresponds to the traffic accident occurring on the traffic road;
and acquiring the geographical track information of at least one object on the traffic road according to the first video.
14. The method according to any one of claims 1 to 13, wherein the environmental information of the accident object comprises one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
15. A traffic accident analysis method, comprising:
acquiring a first video, wherein the first video is acquired by at least one camera arranged on a traffic road, and the first video corresponds to a traffic accident occurring on the traffic road;
acquiring an analysis result of the traffic accident according to the first video;
providing an analysis result of the traffic accident.
16. The method of claim 15, further comprising:
acquiring reported information sent by a terminal, wherein the reported information comprises geographical position information of the occurrence of the traffic accident or time information of the occurrence of the traffic accident, or information of a camera for collecting the first video or an accident object of the traffic accident;
the acquiring the first video comprises: and acquiring the first video from the videos acquired by the at least one camera according to the reported information.
17. The method of claim 15 or 16, wherein the analysis result comprises accident liability determination information.
18. The method of claim 17, wherein providing the analysis of the traffic accident comprises:
and sending the analysis result to a terminal associated with the accident object.
19. The method of claim 18, further comprising:
and receiving feedback information sent by a terminal associated with the accident object, wherein the feedback information indicates that there is an objection or no objection to the accident responsibility judgment information.
20. The method according to any one of claims 15-19, wherein the analysis result comprises one or any combination of the following information:
accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident.
21. The method of claim 20, wherein providing the analysis of the traffic accident comprises:
and sending the analysis result to a management platform.
22. The method of claim 21, further comprising:
receiving an accident feedback result sent by the management platform;
and sending the accident feedback result to a terminal associated with the accident object.
23. The method of any one of claims 15-22, wherein the analysis results comprise: the influence degree of the traffic accident comprises: a degree of traffic jam impact or a collision intensity of the accident object or a degree of damage of the accident object.
24. The method of claim 23, wherein providing the analysis of the traffic accident comprises: and informing a third party of the influence degree of the traffic accident.
25. The method according to any one of claims 15-24, wherein the obtaining the analysis result of the traffic accident according to the first video comprises:
acquiring geographical track information of the accident object according to the first video;
acquiring environmental information of the accident object;
and obtaining an analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
26. The method according to claim 25, wherein the environmental information of the accident object comprises one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
27. An apparatus for analyzing a traffic accident, comprising:
the data processing module is used for acquiring the geographical track information of at least one object on the traffic road;
the accident finding module is used for determining an accident object according to the geographical track information of the at least one object;
and the accident analysis module is used for acquiring the environmental information of the accident object and acquiring the analysis result of the traffic accident according to the geographical track information of the accident object and the environmental information of the accident object.
28. The apparatus of claim 27, wherein the analysis result comprises accident liability determination information.
29. The apparatus of claim 28, further comprising: the result of the transmission module is that,
and the result sending module is used for sending the analysis result to the terminal associated with the accident object.
30. The apparatus of claim 29,
the result sending module is further configured to receive feedback information sent by the terminal associated with the accident object, where the feedback information indicates that there is an objection or no objection to the accident responsibility determination information.
31. The apparatus according to any one of claims 27-30, wherein the analysis result comprises one or any combination of the following information:
accident time information, accident type and description, accident cause, evidence video, and accident detail image, wherein the evidence video is used for showing the cause or process of the traffic accident.
32. The apparatus of claim 31,
and the result sending module is also used for sending the analysis result to a management platform.
33. The apparatus of claim 32,
the result sending module is also used for receiving the accident feedback result sent by the management platform;
and sending the accident feedback result to a terminal associated with the accident object.
34. The apparatus of any one of claims 27-33, wherein the analysis results comprise: the influence degree of the traffic accident comprises: a degree of traffic jam impact or a collision intensity of the accident object or a degree of damage of the accident object.
35. The apparatus of claim 34,
and the result sending module is also used for notifying the influence degree of the traffic accident to a third party.
36. The apparatus of any one of claims 27-35,
the accident finding module is used for determining an abnormal track according to the geographical track information of the at least one object; and determining the accident object by using the abnormal track and a pre-trained artificial intelligence AI model.
37. The device according to any one of claims 27 to 36, wherein the data processing module is configured to obtain video captured by at least one camera disposed on the traffic road; and acquiring the geographical track information of the at least one object according to the video, wherein the geographical track information of each object comprises track information of the object in a road section area shot by at least one camera.
38. The apparatus of any one of claims 27-36,
the data processing module is further configured to: and receiving reported information sent by a terminal, wherein the reported information comprises the geographical position information of the traffic accident and/or the time information of the traffic accident.
39. The apparatus of claim 38,
the data processing module is used for: acquiring a first video in videos collected by at least one camera arranged on the traffic road according to the reported information, wherein the first video corresponds to the traffic accident occurring on the traffic road; and acquiring the geographical track information of at least one object on the traffic road according to the first video.
40. The apparatus according to any one of claims 27-39, wherein the environmental information of the accident object comprises one or any combination of the following information: geographical trajectory information of a surrounding object associated with the accident object, surrounding road environment information associated with the accident object, and traffic sign information of a traffic road on which the accident object is located.
41. An apparatus for traffic accident analysis, comprising:
an acquisition module, configured to acquire a first video, wherein the first video is captured by at least one camera disposed on a traffic road and corresponds to a traffic accident on the traffic road;
an analysis module, configured to obtain an analysis result of the traffic accident according to the first video; and
a providing module, configured to provide the analysis result of the traffic accident.
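The three-module decomposition in claim 41 maps naturally onto a simple pipeline: acquire, analyze, provide. The sketch below only mirrors that structure; all class names and the placeholder bodies are invented, not the patent's implementation:

```python
class AcquisitionModule:
    def get_first_video(self, cameras):
        # Placeholder: a real module would pull footage from roadside
        # cameras and extract the segment showing the accident.
        return {"frames": cameras, "label": "first_video"}

class AnalysisModule:
    def analyze(self, video):
        # Placeholder: a real module would derive the accident analysis
        # result (objects, trajectories, causes) from the first video.
        return {"accident": True, "source": video["label"]}

class ProvidingModule:
    def provide(self, result):
        # Placeholder: a real module might render a report or push the
        # result to a terminal.
        return f"analysis result: accident={result['accident']}"

class TrafficAccidentApparatus:
    """Wires the three claimed modules into one acquire→analyze→provide flow."""
    def __init__(self):
        self.acquisition = AcquisitionModule()
        self.analysis = AnalysisModule()
        self.providing = ProvidingModule()

    def run(self, cameras):
        video = self.acquisition.get_first_video(cameras)
        result = self.analysis.analyze(video)
        return self.providing.provide(result)
```

Keeping each stage behind its own module boundary matches the claim's structure and lets any one stage be replaced independently.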
42. A computing device, comprising a processor and a memory, wherein the memory stores computer instructions, and the processor executes the computer instructions to cause the computing device to perform the method according to any one of claims 1 to 26.
43. A computer-readable storage medium storing computer program code which, when executed by a computing device, causes the computing device to perform the method according to any one of claims 1 to 26.
44. A system for traffic accident analysis, comprising at least one camera disposed on a traffic road and an analysis apparatus, wherein
the at least one camera is configured to capture video and send the video to the analysis apparatus, and the analysis apparatus is configured to perform the method according to any one of claims 1 to 26 based on the captured video.
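Claim 44's system topology is producers (cameras) sending video to a consumer (the analysis apparatus). A minimal in-process sketch of that wiring using a queue as the transport; the `Camera` and `AnalysisDevice` classes are invented stand-ins, and a real deployment would use a network transport instead:

```python
import queue

class Camera:
    def __init__(self, cam_id, out_q):
        self.cam_id = cam_id
        self.out_q = out_q

    def capture_and_send(self, frames):
        # "send the video to the analysis apparatus"
        self.out_q.put((self.cam_id, frames))

class AnalysisDevice:
    def __init__(self, in_q):
        self.in_q = in_q

    def analyze_next(self):
        cam_id, frames = self.in_q.get()
        # A real device would run the claimed accident analysis here;
        # this stub just summarizes what it received.
        return {"camera": cam_id, "frame_count": len(frames)}

q = queue.Queue()
Camera("cam-1", q).capture_and_send(["f0", "f1", "f2"])
result = AnalysisDevice(q).analyze_next()
```

The queue decouples capture from analysis, so multiple roadside cameras can feed one analysis apparatus, as the claim allows.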
CN202011056050.3A 2020-05-14 2020-09-30 Traffic accident analysis method, device and equipment Pending CN113674523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/076689 WO2021227586A1 (en) 2020-05-14 2021-02-18 Traffic accident analysis method, apparatus, and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020104095504 2020-05-14
CN202010409550 2020-05-14

Publications (1)

Publication Number Publication Date
CN113674523A 2021-11-19

Family

ID=78537992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011056050.3A Pending CN113674523A (en) 2020-05-14 2020-09-30 Traffic accident analysis method, device and equipment

Country Status (1)

Country Link
CN (1) CN113674523A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114596704A (en) * 2022-03-14 2022-06-07 阿波罗智联(北京)科技有限公司 Traffic event processing method, device, equipment and storage medium
CN114596704B (en) * 2022-03-14 2023-06-20 阿波罗智联(北京)科技有限公司 Traffic event processing method, device, equipment and storage medium
CN114743373A (en) * 2022-03-29 2022-07-12 北京万集科技股份有限公司 Traffic accident handling method, apparatus, device, storage medium and program product
CN114743373B (en) * 2022-03-29 2023-10-13 北京万集科技股份有限公司 Traffic accident handling method, device, equipment and storage medium
CN114999149A (en) * 2022-05-21 2022-09-02 北京中软政通信息技术有限公司 Traffic accident data rapid acquisition method, device, equipment, system and medium
CN116469254A (en) * 2023-06-15 2023-07-21 浙江口碑网络技术有限公司 Information processing method and device
CN116469254B (en) * 2023-06-15 2023-09-08 浙江口碑网络技术有限公司 Information processing method and device
CN116828157A (en) * 2023-08-31 2023-09-29 华路易云科技有限公司 Traffic accident responsibility judgment auxiliary system and method for automatic driving environment
CN116828157B (en) * 2023-08-31 2023-12-29 华路易云科技有限公司 Traffic accident responsibility judgment auxiliary system for automatic driving environment

Similar Documents

Publication Publication Date Title
US11074813B2 (en) Driver behavior monitoring
WO2021004077A1 (en) Method and apparatus for detecting blind areas of vehicle
Tian et al. An automatic car accident detection method based on cooperative vehicle infrastructure systems
CN113674523A (en) Traffic accident analysis method, device and equipment
US11840239B2 (en) Multiple exposure event determination
US11380105B2 (en) Identification and classification of traffic conflicts
WO2021227586A1 (en) Traffic accident analysis method, apparatus, and device
US11322018B2 (en) Determining causation of traffic events and encouraging good driving behavior
JP7371157B2 (en) Vehicle monitoring method, device, electronic device, storage medium, computer program, cloud control platform and roadway coordination system
US10933881B1 (en) System for adjusting autonomous vehicle driving behavior to mimic that of neighboring/surrounding vehicles
US11836985B2 (en) Identifying suspicious entities using autonomous vehicles
SA520420162B1 (en) Early warning and collision avoidance
US9760783B2 (en) Vehicle occupancy detection using passenger to driver feature distance
CN110602449A (en) Intelligent construction safety monitoring system method in large scene based on vision
CN113255439B (en) Obstacle identification method, device, system, terminal and cloud
CN114248819B (en) Railway intrusion foreign matter unmanned aerial vehicle detection method, device and system based on deep learning
CN113658427A (en) Road condition monitoring method, system and equipment based on vision and radar
CN115618932A (en) Traffic incident prediction method and device based on internet automatic driving and electronic equipment
CN114694060A (en) Road shed object detection method, electronic equipment and storage medium
CN111383248A (en) Method and device for judging red light running of pedestrian and electronic equipment
Dinh et al. Development of a tracking-based system for automated traffic data collection for roundabouts
Suttiponpisarn et al. Detection of wrong direction vehicles on two-way traffic
CN114078319A (en) Method and device for detecting potential hazard site of traffic accident
Ismail et al. Automated pedestrian safety analysis using video data in the context of scramble phase intersections
Bordia et al. Automated traffic light signal violation detection system using convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination