CN112926575A - Traffic accident recognition method, device, electronic device and medium - Google Patents


Info

Publication number
CN112926575A
CN112926575A (application number CN202110089174.XA)
Authority
CN
China
Prior art keywords
traffic accident
time
image
accident
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110089174.XA
Other languages
Chinese (zh)
Inventor
许鹏飞
王智慧
邢腾飞
白冰
周琦
胡润波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202110089174.XA
Publication of CN112926575A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/26: Government or public services
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07C: TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C 5/00: Registering or indicating the working of vehicles
    • G07C 5/08: Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C 5/0841: Registering performance data
    • G07C 5/085: Registering performance data using electronic data carriers
    • G07C 5/0866: Registering performance data using electronic data carriers, the electronic data carrier being a digital video recorder in combination with a video camera
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Abstract

According to embodiments of the present disclosure, a traffic accident recognition method, apparatus, electronic device, storage medium, and program product are provided, relating to the field of intelligent transportation. The method may include determining a time and location of a traffic accident based on accident indication messages reported by vehicles. The method further includes determining an image recording device based on the time and the location. Further, the method may include sending an image capture instruction to the image recording device to cause the image recording device to return an image associated with the time. The method may further include determining a recognition result of the traffic accident based on the image. The technical solution of the present disclosure can identify and verify vehicle-reported traffic accidents accurately, quickly, and at low cost, thereby significantly improving traffic conditions.

Description

Traffic accident recognition method, device, electronic device and medium
Technical Field
Implementations of the present disclosure relate generally to the field of intelligent transportation and, more particularly, to a traffic accident recognition method, apparatus, electronic device, computer-readable storage medium, and computer program product.
Background
The volume of road traffic has increased sharply, and traffic accidents frequently cause long traffic jams. For the current and coming era of intelligent transportation, quickly discovering and broadcasting accidents is a major challenge. Existing accident mining schemes fall into two main categories: one relies on government-provided data, which is highly accurate but limited in volume and poor in timeliness; the other relies on active user reports, which offer high coverage but low accuracy and depend on the cooperation of the user side. There is therefore a need for an automatic, large-scale, high-accuracy traffic accident mining mechanism.
Disclosure of Invention
According to an embodiment of the present disclosure, a solution for identifying a traffic accident is provided.
In a first aspect of the present disclosure, a traffic accident identification method is provided. The method includes determining a time and a location of a traffic accident based on accident indication messages reported by vehicles. The method further includes determining an image recording device based on the time and the location. Further, the method includes sending an image capture instruction to the image recording device to cause the image recording device to return an image associated with the time. The method also includes determining a recognition result of the traffic accident based on the image.
In a second aspect of the present disclosure, a traffic accident identification method is provided. The method includes detecting a target object of a traffic accident. The method further includes generating an accident indication message for reporting based on time information and location information associated with the target object. In addition, the method includes sending an image in accordance with a determination that a request for the image in which the target object is located is received.
In a third aspect of the present disclosure, a traffic accident recognition device is provided. The device includes: a time and location determination module configured to determine the time and location of a traffic accident based on an accident indication message reported by a vehicle; an image recording device determination module configured to determine an image recording device based on the time and location; an image acquisition instruction sending module configured to send an image acquisition instruction to the image recording device so that the image recording device returns an image associated with the time; and a recognition result determination module configured to determine a recognition result of the traffic accident based on the image.
In a fourth aspect of the present disclosure, there is provided an electronic device comprising a memory and a processor, wherein the memory is for storing one or more computer instructions, and the one or more computer instructions are executed by the processor to implement the method according to the first or second aspect of the present disclosure.
In a fifth aspect of the present disclosure, there is provided a computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement a method according to the first or second aspect of the present disclosure.
In a sixth aspect of the present disclosure, there is provided a computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method of the first or second aspect of the present disclosure.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The features, advantages and other aspects of various implementations of the present disclosure will become more apparent from the following detailed description when taken in conjunction with the accompanying drawings, which illustrate, by way of example and not by way of limitation, several implementations of the present disclosure. In the drawings:
FIG. 1 shows a schematic block diagram of an example system for identifying a traffic accident according to an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a more detailed example environment in which embodiments of the present disclosure can be implemented;
FIG. 3 shows a flow diagram of a traffic accident identification process according to an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of another more detailed example environment in which embodiments of the present disclosure can be implemented;
FIG. 5 shows a flow chart of a traffic accident identification process according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a process for determining the time and location of a traffic accident through aggregation operations according to an embodiment of the present disclosure;
FIG. 7 illustrates a high-level pipeline diagram of a process of identifying a traffic accident according to an embodiment of the present disclosure;
FIG. 8 shows a block diagram of a traffic accident recognition device according to an embodiment of the present disclosure; and
FIG. 9 schematically illustrates a block diagram of a computing device in accordance with an exemplary implementation of the present disclosure.
Detailed Description
Preferred implementations of the present disclosure will be described in more detail below with reference to the accompanying drawings. While a preferred implementation of the present disclosure is shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the implementations set forth herein. Rather, these implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The term "include" and variations thereof as used herein is meant to be inclusive in an open-ended manner, i.e., "including but not limited to". Unless specifically stated otherwise, the term "or" means "and/or". The term "based on" means "based at least in part on". The terms "one example implementation" and "one implementation" mean "at least one example implementation". The term "another implementation" means "at least one additional implementation". The terms "first," "second," and the like may refer to different or the same object. Other explicit and implicit definitions are also possible below.
In embodiments of the present disclosure, vehicles traveling on a road may use equipped sensors (e.g., driving recorders) to capture images of traffic conditions in real time. A recognition model deployed on the vehicle automatically recognizes target objects related to a traffic accident (e.g., a warning sign such as a tripod, a cone, a pothole, or car body debris) to determine whether an accident has occurred, and an accident indication message is then transmitted to a server. By exploiting the sensing and recognition capability of the large number of vehicles running on the road, a larger area can be covered, so that the time needed to discover and detect traffic accidents is significantly shortened; drivers can obtain traffic accident information earlier and take corresponding measures such as detouring or slowing down, improving the traffic efficiency of urban roads. A traffic accident identification scheme according to an embodiment of the present disclosure is described in more detail below with reference to the accompanying drawings.
Fig. 1 shows a schematic block diagram of an example system 100 for identifying traffic accidents according to an embodiment of the present disclosure. The system 100 includes a vehicle 110 and a server 120, connected via a network 130. The vehicle 110 may be any type of motor vehicle traveling on a road, such as a passenger car, a sport utility vehicle (SUV), a bus, or a truck. The vehicle 110 includes an image acquisition unit 111, such as a driving recorder mounted at the front of the vehicle or a number of environment-sensing cameras mounted around the body to cover a 360-degree field of view. The image acquisition unit 111 may capture images of the environment around the vehicle in real time, for example a video comprising consecutive image frames, from which it can be recognized whether there is a traffic accident around the vehicle. In addition to the image acquisition unit 111, the vehicle 110 may also comprise other types of sensors (not shown), such as ultrasonic sensors or (laser) radars. The images acquired by multiple sensors may be combined or fused to form an image usable for detecting traffic accidents.
The vehicle 110 includes a positioning unit 112, which may be a device capable of providing location information in real time, such as a Global Positioning System (GPS) or Global Navigation Satellite System (GNSS) receiver. With the aid of the positioning unit 112, the vehicle's position can be attached to the image acquired by the image acquisition unit 111 as the image's location information and subsequently used as the location of a traffic accident. In some cases, however, location information may not be attached to the image; the location obtained from the positioning unit 112 at the time the vehicle 110 reports the traffic accident may then be used as the accident location. Time information, such as a time stamp, may also be added to the image acquired by the image acquisition unit 111. Thus, an image acquired by the image acquisition unit 111 includes the image data itself together with associated location and time information. The location information may be GPS positioning information including longitude and latitude. Additionally, the location information may include lane information; for example, the lane of a traffic accident may be derived from the lane in which the current vehicle 110 is located and the lateral position of the target object in the image. If the detected target object is located on the left side of the image, the traffic accident may be considered to have occurred in a lane to the left of the current vehicle 110. When a traffic accident is broadcast, its lane may be indicated so that following vehicles can merge into other lanes early.
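The left/right lane heuristic above can be sketched as a small function. The name `infer_accident_lane`, the thirds-based split of the frame, and the lane indexing scheme are illustrative assumptions, not details given in the disclosure:

```python
def infer_accident_lane(bbox_center_x: float, image_width: int, ego_lane: int) -> int:
    """Estimate the lane of a detected accident object relative to the reporting vehicle.

    bbox_center_x: horizontal center of the object's bounding box, in pixels.
    image_width:   width of the camera frame, in pixels.
    ego_lane:      lane index of the reporting vehicle (0 = leftmost lane).
    """
    third = image_width / 3
    if bbox_center_x < third:        # object appears in the left part of the frame
        return max(ego_lane - 1, 0)  # likely the lane to the left of the vehicle
    if bbox_center_x > 2 * third:    # object appears in the right part of the frame
        return ego_lane + 1          # likely the lane to the right of the vehicle
    return ego_lane                  # object roughly ahead, same lane
```

A real system would also account for camera calibration and road curvature; this sketch only captures the "left side of the image implies left lane" rule stated in the text.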
The vehicle 110 further comprises a recognition model 113, which may be deployed to detect whether a target object is included in the image from the image acquisition unit 111. The target object can be a warning sign at a traffic accident scene (such as a tripod), a cone, a pothole, or car body debris. The recognition model 113 may be a deep-learning-based neural network model trained to recognize several specified classes of target objects in an image. In particular, the recognition model 113 may extract a region of interest (typically a rectangular region) from the image and identify the class of the object therein; for example, it may recognize a tripod in the image. Considering the limited computational capability of the vehicle 110, the recognition model 113 may be configured as a lightweight neural network model, with fewer layers, fewer nodes, or fewer learnable parameters than a recognition model deployed on a computing device with higher computing power (e.g., the server 120). The recognition model 113 therefore has the advantage of low latency, so that it can be determined more quickly whether the vehicle is passing a traffic accident scene.
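A minimal sketch of how the on-vehicle model's raw detections might be filtered down to accident target objects follows. The `Detection` structure, the class names, and the 0.6 confidence threshold are assumptions for illustration; the disclosure names the object types but specifies no threshold value:

```python
from dataclasses import dataclass

# Illustrative class set and threshold, not values from the disclosure.
TARGET_CLASSES = ("tripod", "cone", "pothole", "debris")
CONFIDENCE_THRESHOLD = 0.6

@dataclass
class Detection:
    label: str    # predicted object class
    score: float  # model confidence in [0, 1]
    box: tuple    # region of interest as (x1, y1, x2, y2)

def filter_accident_objects(detections):
    """Keep only detections that indicate a traffic-accident target object."""
    return [d for d in detections
            if d.label in TARGET_CLASSES and d.score >= CONFIDENCE_THRESHOLD]
```

If `filter_accident_objects` returns a non-empty list, the vehicle would proceed to generate an accident indication message as described below.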
The vehicle 110 also includes a memory 114. The memory 114 may be a fixed or removable persistent storage device, such as a magnetic disk, fixed hard disk, Universal Serial Bus (USB) memory, or memory card, with a capacity of, e.g., 4 GB, 8 GB, 16 GB, or more. The memory 114 may be connected to the image acquisition unit 111 to receive and store its images. As described above, the images may carry associated time and location information, which may be stored in the memory 114 along with the acquired image data. The memory may be accessed to transmit images within a specified time period to other devices (e.g., the server 120). Due to the limited capacity of the memory 114, image data with earlier timestamps may be erased or overwritten by new images. In addition, as described above, an image in which a target object of a traffic accident has been detected may be marked as non-erasable, or protected from being overwritten for a certain period of time, so that data of higher value is retained.
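The retention policy described above (overwrite old frames, but keep frames containing a detected target object) could look like the following sketch; the `DashcamStore` class and its fields are hypothetical:

```python
class DashcamStore:
    """Bounded frame store: the oldest unpinned frame is evicted first, while frames
    flagged as containing an accident target object are pinned and survive eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.frames = []  # each frame: {"time": ..., "pinned": bool, ...}

    def add(self, frame: dict) -> None:
        if len(self.frames) >= self.capacity:
            # Evict the oldest frame that is not pinned. (If every frame is
            # pinned, the store temporarily grows past its capacity.)
            for i, f in enumerate(self.frames):
                if not f["pinned"]:
                    del self.frames[i]
                    break
        self.frames.append(frame)
```

A production implementation would pin frames only for a bounded time window, matching the "not overwritten for a certain period" behavior in the text.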
When a target object of a traffic accident, e.g., a tripod, is detected in the captured image, the vehicle 110 may report the traffic accident to the server 120 via the network 130 using the communication unit 115. The communication unit 115 may be, for example, a Bluetooth module, a cellular module, a vehicle-to-everything (V2X) communication module, a Dedicated Short Range Communication (DSRC) module, or any other module with wireless communication capability. The vehicle 110 may transmit an accident indication message to the server 120 containing the location, time, severity, etc. of the traffic accident.
The server 120 may be an edge device deployed in a 5G environment or a cloud server located on the core network side, having greater computing power and storage capacity than the vehicle 110. The server 120 receives the accident indication message reported by the vehicles 110 within its coverage via its communication unit 121 and processes, e.g., verifies, the authenticity of the reported traffic accident.
The server 120 verifies the authenticity of the traffic accident using a recognition model 122. Like the on-vehicle model, the recognition model 122 may be a deep-learning-based neural network model trained to recognize target objects in an image; it extracts regions of interest from the image and identifies the classes of objects therein. Since the server 120 generally has greater computational power than the vehicle 110, the recognition model 122 may be more complex than the recognition model 113 on the vehicle, with more layers, more neural network nodes, and more learnable parameters, and thus higher accuracy. In some embodiments, the server 120 verifies the authenticity of the traffic accident with the help of the recognition model 122 in response to the accident indication message reported by the vehicle 110.
In some embodiments, the server 120 may also request images of the traffic accident from a fixed image recording device (e.g., a roadside device) or a mobile image recording device (e.g., another vehicle, or the law enforcement recorder of a traffic manager located at the scene within the corresponding time period) near the location where the traffic accident occurred, and verify whether the images contain a target object; if so, the traffic accident may be identified as real.
The server 120 also includes a memory 123, which can maintain a history library 124 of traffic accidents. The history library 124 records entries for traffic accidents that have passed verification, each entry including attributes of the accident such as location, occurrence time, status, and images. The location may be set from the location information of the accident indication message reported by the vehicle 110, the occurrence time from its time information, and the status may indicate whether the traffic accident still exists or has been cleared. In some embodiments, the server 120 may compare a reported accident indication message with the traffic accidents in the history library; if at least the location or the time of the accident indication message differs from the existing records in the history library 124, a new traffic accident is considered to have occurred, and the server will identify whether it is a real traffic accident.
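The history-library check could be sketched as follows. The 100 m and 30 min matching radii are invented for illustration; the disclosure only says a report is new when its location or time differs from every existing record (i.e., a report matches only when both are close):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def is_new_accident(message, history, max_dist_m=100.0, max_dt_s=1800.0):
    """A report matches a history entry only when it is close in BOTH space and time;
    otherwise it is treated as a new accident to be verified."""
    for entry in history:
        near = haversine_m(message["lat"], message["lon"],
                           entry["lat"], entry["lon"]) <= max_dist_m
        recent = abs(message["time"] - entry["time"]) <= max_dt_s
        if near and recent:
            return False  # same accident already recorded; skip a new recognition task
    return True
```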
A traffic accident identification process according to an embodiment of the present disclosure is described in detail below with reference to fig. 2 to 5.
FIG. 2 illustrates a schematic diagram of a more detailed example environment 200 in which embodiments of the present disclosure can be implemented. The environment 200 includes a vehicle 220 driving past the scene of a traffic accident 210; the driving recorder on the vehicle 220 continuously collects image data in front of the vehicle 220, with its field of view shown by the dotted lines in the figure. A tripod 230 is placed at the accident scene 210 to warn the traffic behind it. According to statistical analysis, urban traffic accident scenes usually contain typical target objects such as tripods and cones, and sometimes also road potholes or car body debris produced by the accident. The vehicle 220 can therefore detect whether such a target object, e.g., a tripod, is included in the images acquired by its driving recorder, thereby automatically determining whether a traffic accident exists without user intervention.
Fig. 3 shows a flow diagram of a traffic accident identification process 300 according to an embodiment of the present disclosure. The process 300 may be adapted for execution at a vehicle. By identifying the traffic accident on the moving vehicle, the coverage rate and timeliness of the traffic accident reporting can be guaranteed.
At block 302, the vehicle 110 detects a target object of a traffic accident. The image acquisition unit 111 of the vehicle 110, such as a driving recorder, may capture visual information in front of the vehicle in real time and form images or video frames. As described above, the acquired images may be one or more image frames carrying time stamps and location information; in this case, the time stamp and location information of the image may be used as the time and location of the traffic accident. Alternatively, an image may not include location information, in which case the location reported by the vehicle 110 at the time of the traffic accident may be used as the accident location. The one or more images are applied to the recognition model 113 to determine whether a target object is included. For example, the recognition model 113 may detect whether the image contains a tripod, a cone, a pothole, car body debris, or other objects common at traffic accident scenes.
In certain embodiments, the recognition model 113 of the vehicle 110 may be a deep-learning-based neural network model. As described above, it may be trained in advance to extract image features, detect regions of interest, and classify the objects in those regions. The recognition model 113 may be a lightweight neural network model with few parameters and low computational cost, suitable for deployment on vehicles with limited computing resources. It may be configured to detect whether the image contains a tripod, a cone, a pothole, car body debris, or other objects common at traffic accident scenes. In some embodiments, the recognition model 113 may be configured to detect only a specific target object, such as a tripod: it calculates the probability that the specific target object exists and outputs a detection result when the probability exceeds a preset threshold. Additionally or alternatively, the recognition model 113 may detect more target objects, i.e., classify the objects in the image by determining the probabilities that they belong to the various target classes. For example, it may calculate the probability that an object in the image belongs to the tripod or cone category and output a detection result based on that probability.
In certain embodiments, the captured images may be fed to the recognition model 113 of the vehicle 110 in real time and thus detected in real time. Alternatively, the captured images may be stored in the memory 114 of the vehicle 110, and the recognition model 113 may access the memory 114 periodically (e.g., every 10 seconds, 30 seconds, or 1 minute, without limitation) to detect whether the stored images include a target object. When an image is detected to include a target object related to a traffic accident, the vehicle 110 considers that a traffic accident exists at that location and, on this basis, generates and transmits an accident indication message.
At block 304, an accident indication message is generated for reporting, based on the time information and location information associated with the target object. Since a target object such as a tripod may appear in successive images, the traffic accident need only be reported once for those images. For example, when the target object is detected in each of a plurality of images whose time stamps fall within a time period, and is not detected in any image with a time stamp earlier or later than that period, the accident indication message is generated based on the images within the period. The generated accident indication message contains or indicates the location and time of the traffic accident in question.
In some embodiments, multiple images within the time period may be aggregated into a single image or video. For example, a new image or video may be generated as a representative image of the plurality of images in which the target object is detected. The new image or video may be stored in memory 114 of vehicle 110 for future access by server 120.
In some embodiments, the time and location information of the aggregated images may be included in the accident indication message as the time and location of the traffic accident identified by the vehicle. For example, the time of the traffic accident may be, without limitation, the time stamp of the earliest image, the latest image, or the image whose time stamp lies in the middle among those in which the target object is detected. Similarly, the location of the traffic accident may be the location information of the earliest image, the latest image, or the image whose time stamp lies in the middle.
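The report-once behavior and the choice of a representative timestamp could be sketched as below. The 5-second gap used to split sightings is an assumed parameter, and the middle-frame choice is just one of the options the text allows (earliest and latest are equally valid):

```python
def group_detection_windows(timestamps, max_gap_s=5.0):
    """Group frame timestamps that contain the target object into contiguous windows,
    so one accident seen across many consecutive frames yields a single report."""
    windows = []
    for t in sorted(timestamps):
        if windows and t - windows[-1][-1] <= max_gap_s:
            windows[-1].append(t)   # continues the current sighting
        else:
            windows.append([t])     # gap too large: a new, separate sighting
    return windows

def build_accident_message(window, location):
    """One accident indication message per window, timed at the middle frame."""
    return {"time": window[len(window) // 2], "location": location}
```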
The vehicle 110 may send the accident indication message to the server 120. The accident indication message indicates the location and time of the traffic accident; preferably, it does not include an image, in order to save bandwidth. The accident indication message triggers the server 120 to verify the authenticity of the traffic accident. The server 120 may receive accident indication messages from a plurality of vehicles within its coverage area. It should be understood that accident indication messages that are identical or close in location and time may relate to the same traffic accident, so the accident need not be verified for each message; the server 120 may aggregate the accident indication messages based on the location and time of the traffic accident to determine the accident. The server 120 may verify the authenticity of the traffic accident by detecting whether a target object is included in an image of the accident. Accordingly, the server 120 requests an image associated with the traffic accident from the vehicle 110. The request may contain the time corresponding to the accident indication message so that the vehicle 110 can retrieve the corresponding image from its stored image data.
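On the vehicle side, answering the server's image request amounts to a timestamp lookup over stored frames; the 30-second window below is an assumed value, not one given in the disclosure:

```python
def select_images_for_request(stored_images, requested_time, window_s=30.0):
    """Return stored frames whose timestamps fall within a window around the time
    named in the server's image acquisition instruction."""
    return [img for img in stored_images
            if abs(img["time"] - requested_time) <= window_s]
```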
In response, at block 306, the vehicle 110 transmits an image in accordance with a determination that a request for an image in which the target object is located is received. The image will be further recognized by the server 120 to confirm the authenticity of the traffic accident.
In this way, by utilizing the sensing and recognition capabilities of the large number of vehicles running on the road, a larger area can be covered, so that the time needed to discover and detect traffic accidents is significantly shortened; drivers can obtain traffic accident information earlier and take corresponding measures such as detouring or slowing down, improving the traffic efficiency of urban roads.
The process of the server 120 identifying a traffic accident based on the accident indication message is described in detail below with reference to fig. 4 and 5.
FIG. 4 illustrates a schematic diagram of a more detailed example environment 400 in which embodiments of the present disclosure can be implemented. As shown in FIG. 4, a traffic accident 410 has occurred at an intersection, and a vehicle 420 is driving past it. The vehicle 420 records the accident image and detects the tripod, thereby generating and sending an accident indication message to the server 120.
In some embodiments, the server 120 may determine, based on the time and location information in the accident indication message, all image recording devices that passed that location at that time. As shown in FIG. 4, the image recording device may be a vehicle 430 on the other side of the traffic accident 410 that records it, a law enforcement recorder 440 of a traffic manager working near the intersection, or a roadside apparatus 450 (e.g., a roadside camera) that captures the entire course of the traffic accident 410. More conveniently, since the reporting vehicle 420 is itself equipped with a driving recorder with image capture and recording functions, the vehicle 420 can also serve as an image recording device and upload images associated with the traffic accident 410. Based on these images, the recognition model 122 in the server 120 can verify the traffic accident.
Fig. 5 shows a flow diagram of a traffic accident identification process 500 according to an embodiment of the present disclosure. In some embodiments, process 500 may be implemented in server 120 shown in FIG. 1.
At 502, the server 120 may determine the time and location of the traffic accident 410 based on the accident indication message reported by the vehicle. As an example, an accident indication message is generated and sent to the server 120 when a vehicle 420 traveling on the road detects that a target object 460, such as a tripod, related to the traffic accident 410 is included in the images recorded by its driving recorder. The time indicated by the accident indication message may generally be determined as the timestamp of the image in which the target object is located. Further, since the vehicle 420 is typically close to the traffic accident 410 when passing it, the location of the vehicle 420 when the accident indication message was generated (e.g., GPS location information generated by an onboard positioning unit) may be determined as the location of the traffic accident 410.
It should be appreciated that the same traffic accident may be reported several times since each vehicle 420 that approaches the accident scene and detects the target object in the image captured by its image capturing unit 111 may report the accident indication message. If these incident indication messages are determined to be a plurality of different incidents, a plurality of subsequent recognition tasks will be generated, thereby wasting more computing resources. To this end, in some embodiments, the number of reported incidents may be reduced by an aggregation operation. Fig. 6 shows a schematic diagram of a process 600 for determining the time and location of occurrence of a traffic accident through an aggregation operation in accordance with an embodiment of the present disclosure.
At 602, the server 120 may obtain time information and location information from the received accident indication message. For example, the accident indication message typically contains, as the time of the traffic accident, the timestamp of the image in which the target object was detected. In addition, since the vehicle in motion is usually close to the traffic accident 410, the accident indication message may further include, as the location of the traffic accident, the position of the vehicle when the image of the target object was captured. The accident indication message may also contain identity information (e.g., a device ID) of the vehicle that sent it.
At 604, the server 120 may aggregate the accident indication message with other accident indication messages to determine the traffic accident 410, based on the determined time information and location information and the corresponding time information and location information in the other accident indication messages. As an example, the server 120 may determine, through an aggregation operation, that all accident indication messages whose location information falls within an area of a predetermined radius correspond to the same traffic accident. It should be understood that the aggregation operation described in this disclosure is merely exemplary; multiple accident indication messages may also be determined to correspond to the same traffic accident by other means, such as a machine learning model.
At 606, the server 120 may determine the time and location of the traffic accident 410 based on the time information and location information described above. Alternatively or additionally, the server 120 may determine the time and location of the traffic accident 410 by aggregating the plurality of accident indication messages, for example by computing a vector sum of the plurality of locations corresponding to those messages. In this way, the present disclosure may aggregate the reported accident indication messages into a single traffic accident, thereby using subsequent computing resources more efficiently.
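The aggregation operation described above can be sketched as follows. This is a minimal, illustrative Python implementation, not the disclosure's actual algorithm: the function names, the greedy clustering strategy, and the 100 m radius are assumptions. It groups reports by great-circle distance to a cluster seed and derives one representative time and location per cluster.

```python
import math
from statistics import median

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def aggregate_reports(reports, radius_m=100.0):
    """Greedily group (timestamp, lat, lon) reports whose locations fall
    within radius_m of a cluster seed; one cluster ~ one traffic accident."""
    clusters = []
    for ts, lat, lon in reports:
        for c in clusters:
            if haversine_m(lat, lon, c["lat"], c["lon"]) <= radius_m:
                c["members"].append((ts, lat, lon))
                break
        else:
            clusters.append({"lat": lat, "lon": lon, "members": [(ts, lat, lon)]})
    accidents = []
    for c in clusters:
        times = [m[0] for m in c["members"]]
        lats = [m[1] for m in c["members"]]
        lons = [m[2] for m in c["members"]]
        accidents.append({
            "time": median(times),          # representative timestamp
            "lat": sum(lats) / len(lats),   # mean position of the reports
            "lon": sum(lons) / len(lons),
            "reports": len(c["members"]),
        })
    return accidents
```

Two nearby reports collapse into one accident while a distant report stays separate, which is exactly the resource saving the aggregation step is after.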
In some embodiments, the reported traffic accidents can be deduplicated. As an example, the server 120 may compare the time and location of the aggregated traffic accident with the corresponding times and locations of historical traffic accidents. When the time and location are the same as or close to those of a traffic accident record in the history repository (for example, the times differ by less than a predetermined duration and the locations differ by less than a predetermined distance), the traffic accident is considered to have already been reported and identified as a real traffic accident. In this case, the traffic accident can be directly ignored and need not be identified again. In other words, if at least one of the time and the location differs from the corresponding time and location of the historical traffic accidents, the traffic accident is determined as a traffic accident to be identified. In this way, whether a traffic accident reported by a user has already been identified or published can be judged before the image recognition operation, which reduces subsequent recognition operations and saves computing resources.
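A deduplication check against the history repository might look like the sketch below. The thresholds (30 minutes, 200 m), the flat-earth distance approximation, and all names are illustrative assumptions rather than parameters from the disclosure.

```python
import math

def approx_dist_m(lat1, lon1, lat2, lon2):
    """Flat-earth distance approximation, adequate for the
    few-hundred-metre thresholds used here."""
    m_per_deg = 111320.0
    dx = (lon2 - lon1) * m_per_deg * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * m_per_deg
    return math.hypot(dx, dy)

def needs_identification(time_s, lat, lon, history,
                         max_dt_s=1800.0, max_dist_m=200.0):
    """A reported accident is skipped only if BOTH its time and its
    location match some already-identified historical accident;
    otherwise it is queued for image-based identification."""
    for h_time, h_lat, h_lon in history:
        if abs(time_s - h_time) <= max_dt_s and \
           approx_dist_m(lat, lon, h_lat, h_lon) <= max_dist_m:
            return False  # duplicate of an identified accident: ignore
    return True
```

Running this check before the image recognition step is what allows the server to skip already-published accidents and save computing resources.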
Returning to fig. 5, at 504, the server 120 may further determine an image recording device based on the determined time and location. In some embodiments, the image recording device may be an image recording device of a vehicle, such as the driving recorder of vehicle 420 or 430 in FIG. 4, or a roadside apparatus, such as roadside apparatus 450 in FIG. 4. As an example, the server 120 may determine a time period associated with the determined time and a road segment associated with the determined location.
For example, after the server 120 determines the time and location of the traffic accident 410 based on the accident indication message reported by the user, it may determine a time period of a predetermined length that includes the time, and a road segment or area of a predetermined extent that includes the location. Thereafter, the server 120 searches or traverses the historical data for vehicles 420, 430 located in the road segment or area during the time period, and determines the image recording devices of the vehicles found. In some embodiments, the server may request images from an image recording device of an additional vehicle or device different from the vehicle reporting the traffic accident. Alternatively or additionally, a law enforcement recorder 440 of a traffic manager located in the road segment or area during the time period, or a roadside device 450 (e.g., a roadside camera) that captured the entire course of the traffic accident 410, may also be determined from the historical data as an image recording device. In this way, the server 120 may perform image recognition using traffic accident images recorded by a large number of vehicles traveling on the road, so that the accuracy of the recognition result 130 may be significantly improved.
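The device lookup described above can be sketched as a scan over historical track points. Everything here is illustrative: the `TrackPoint` record, the 10-minute window, and the 300 m radius are assumptions, and a production system would query an indexed trajectory store rather than scan a list.

```python
import math
from dataclasses import dataclass

@dataclass
class TrackPoint:
    device_id: str  # dashcam, law-enforcement recorder, or roadside camera
    time_s: float
    lat: float
    lon: float

def find_recording_devices(accident_time, accident_lat, accident_lon,
                           track_points, window_s=600.0, radius_m=300.0):
    """Scan historical track points for devices that were inside the
    accident area during the time window around the accident."""
    m_per_deg = 111320.0  # metres per degree of latitude (approximate)
    devices = set()
    for p in track_points:
        if abs(p.time_s - accident_time) > window_s:
            continue  # outside the time period associated with the accident
        dx = (p.lon - accident_lon) * m_per_deg * math.cos(math.radians(accident_lat))
        dy = (p.lat - accident_lat) * m_per_deg
        if math.hypot(dx, dy) <= radius_m:
            devices.add(p.device_id)
    return sorted(devices)
```

The returned device IDs are the candidates to which the server would send image acquisition instructions.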
At 506, the server 120 may send an image capture instruction to the image recording device to cause the image recording device to return an image associated with the time and the location. As an example, when a certain vehicle is determined to be an image recording device, the vehicle may retrieve images or video frames recorded at the time or a time period including the time from its memory and upload the images or video frames to the server 120.
At 508, the server 120 may determine the identification of the traffic accident 410 based on the uploaded images. In some embodiments, the images may be recognized by a trained recognition model 122 to produce recognition results such as the traffic accident 410 being a real traffic accident, the type of accident for the traffic accident 410, and the like. As an example, the server 120 may first determine a feature representation of the image. For example, vectorization or feature engineering may be performed on the uploaded image 140. The server 120 may then apply the determined feature representation to the trained recognition model 122. The recognition model 122 may determine the recognition result of the traffic accident 410 based on the image. It should be appreciated that the recognition model 122 may be trained by taking multiple reference feature representations as inputs and corresponding annotated reference recognition results as outputs.
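The featurize-then-apply flow at block 508 can be illustrated with a deliberately tiny stand-in. The disclosure's recognition model 122 is a trained model operating on real image features; the hand-weighted logistic scorer and toy brightness/variance features below exist only to show the shape of the pipeline and are not the actual method.

```python
import math

def featurize(image_pixels):
    """Toy feature representation (mean brightness, pixel variance).
    A real system would vectorize the image or use a learned embedding."""
    n = len(image_pixels)
    mean = sum(image_pixels) / n
    var = sum((p - mean) ** 2 for p in image_pixels) / n
    return [mean, var]

class RecognitionModel:
    """Illustrative stand-in for the trained recognition model 122: a
    logistic scorer with hand-set weights. A real deployment would load
    a model trained on annotated reference feature representations."""

    def __init__(self, weights, bias, threshold=0.5):
        self.weights, self.bias, self.threshold = weights, bias, threshold

    def predict(self, features):
        z = sum(w * f for w, f in zip(self.weights, features)) + self.bias
        prob = 1.0 / (1.0 + math.exp(-z))  # sigmoid score in (0, 1)
        return {"is_real_accident": prob >= self.threshold, "score": prob}
```

The same two-step interface (determine a feature representation, then apply the model to it) is what the apparatus modules in fig. 8 formalize.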
Fig. 7 schematically illustrates a high-level pipeline diagram of a process 700 of identifying a traffic accident according to an embodiment of the present disclosure. In the embodiment shown in fig. 7, process 700 is performed by one or more vehicles 110 in conjunction with the server 120.
At 701, the vehicle 110 captures images of traffic conditions. Various sensors on the vehicle 110, such as the image capture unit 111, may capture images of the vehicle's surroundings in real time. The captured images may form a video comprising a series of frames. The images may be fed in real time to the recognition model 113 deployed on the vehicle and stored in the vehicle's memory 114 for later access. As described above, time information (e.g., a timestamp) and location information (e.g., GPS positioning information) may also be attached to each image.
At 702, the vehicle 110 detects a target object in an image. The vehicle 110 uses the deployed recognition model 113 to detect whether the image contains objects common to accident scenes, such as warning signs, cones, potholes, and body debris. The recognition model 113 may be a deep-learning-based neural network model trained to recognize these accident-related objects in images. Given the limited computational power of the vehicle 110, the recognition model 113 may be configured as a lightweight neural network model, whose low latency allows it to determine more quickly whether the vehicle has passed the scene of a traffic accident.
As described above, the captured traffic condition image may be in the form of a video comprising a plurality of frames. In this case, a target object such as a tripod appears in successive images, yet only one traffic accident needs to be reported for them.
Thus, at 703, the vehicle 110 aggregates the detection results. For example, when a target object is detected in each of a plurality of images within a time period and no target object is detected in any of the images earlier or later than the time period, the time period corresponds to a traffic accident. Thereby, the detection results may be aggregated based on the time stamp of the image in which the target object is detected. The vehicle may then generate an accident indication message relating to the traffic accident during the time period. The accident indication message indicates the time and location of the traffic accident. The time may be any timestamp within the time period, such as a start time, a middle time, or an end time. The location may be an acquisition location of the corresponding image.
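The vehicle-side aggregation at 703 can be sketched as grouping per-frame detections into contiguous intervals. The 2-second gap tolerance and the choice of the interval midpoint as the reported time are illustrative assumptions; the disclosure only requires some timestamp within the period.

```python
def detection_intervals(frames, max_gap_s=2.0):
    """Group per-frame detections (timestamp, detected) into contiguous
    intervals; detections separated by more than max_gap_s start a new
    interval, and each interval is treated as one traffic accident."""
    intervals = []
    start = last = None
    for ts, detected in sorted(frames):
        if not detected:
            continue
        if start is None:
            start = last = ts
        elif ts - last <= max_gap_s:
            last = ts
        else:
            intervals.append((start, last))
            start = last = ts
    if start is not None:
        intervals.append((start, last))
    return intervals

def accident_messages(frames, location):
    """One accident indication message per interval: a timestamp from
    within the interval (here the midpoint) plus the vehicle position.
    The image itself is deliberately not included, saving bandwidth."""
    return [{"time": (s + e) / 2.0, "location": location}
            for s, e in detection_intervals(frames)]
```

A run of consecutive detections thus yields a single message, while detections separated by a clear gap yield separate messages for separate accidents.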
The vehicle 110 then sends an accident indication message to the server 120 at 704. As described above, the accident indication message includes the time and location of the traffic accident, but does not necessarily include the associated image. Bandwidth resources are therefore saved even when a plurality of vehicles report the same traffic accident.
Next, at 705, the accident indication messages are aggregated by the server 120. The server 120 may receive accident indication messages from a plurality of vehicles within its coverage area. It should be understood that accident indication messages that are the same or close in location and time may relate to the same traffic accident, so the traffic accident need not be identified once per accident indication message. For example, the server 120 may determine, through an aggregation operation, that all accident indication messages whose location information falls within an area of a predetermined radius correspond to the same traffic accident.
As described above, the server 120 has a history repository that stores historical traffic accidents, which represent real traffic accidents that have already been identified. At 706, the server 120 compares the time and location of the aggregated traffic accident with the history repository. If the time and location are the same as or close to those of a traffic accident record in the history repository (for example, the times differ by less than a predetermined duration and the locations differ by less than a predetermined distance), the traffic accident is considered to have already been reported and verified. In this case, the traffic accident can be directly ignored and need not be verified again. Otherwise, the traffic accident is determined as a traffic accident to be identified.
The server identifies the traffic accident by verifying images of the traffic accident. At 707, the server 120 determines vehicles that may have images of the traffic accident, e.g., additional vehicles that passed the accident scene during or after the time period of the traffic accident. A determined vehicle may or may not be a vehicle that sent an accident indication message to the server 120. That is, the determined vehicles are not limited to the vehicles reporting the traffic accident; they may be any vehicles that may have images of the traffic accident, or even other devices that passed the accident scene.
After the vehicles are determined, the server 120 issues an image capture instruction to the image recording devices of the determined vehicles at 708. The image capture instruction indicates the time for which the server requests images, such as video content related to the traffic accident.
At 709, vehicle 110 uploads the stored image or video frames for the time or time period based on the instruction.
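Retrieving the stored frames for a requested time range at 709 can be sketched with a small time-indexed store. The `FrameStore` class and its API are hypothetical stand-ins for the vehicle's memory 114; a real driving recorder would read from a ring buffer of encoded video segments.

```python
import bisect

class FrameStore:
    """Minimal time-indexed store of (timestamp, frame) pairs, standing
    in for the vehicle's memory 114."""

    def __init__(self):
        self._times = []
        self._frames = []

    def add(self, ts, frame):
        # Keep entries sorted by timestamp so range queries stay O(log n).
        i = bisect.bisect(self._times, ts)
        self._times.insert(i, ts)
        self._frames.insert(i, frame)

    def query(self, start, end):
        """Return all frames recorded in [start, end], i.e., the material
        to upload in response to an image capture instruction."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return self._frames[lo:hi]
```

Given the time or time period in the server's instruction, the vehicle queries the store and uploads the matching frames.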
Then, the server 120 determines a recognition result of the traffic accident based on the image at 710. As described above, the server 120 may recognize the received image through the trained recognition model 122 to detect whether the target object related to the traffic accident is included therein. Thus, the server 120 may generate a recognition result such as that the traffic accident is a real traffic accident, an accident type of the traffic accident, and the like.
At 711, the server 120 publishes the traffic accident information. The traffic accident information is the information reported by the vehicle and verified by the server 120, so the traffic accident information provided by the present disclosure has the advantages of high coverage rate, timeliness and reliability.
It should be noted that the vehicles 110 interacting with the server in process 700 are not necessarily the same vehicle. For example, the accident indication message may be sent to the server 120 by a first vehicle that has the capability of reporting a traffic accident, while the image acquisition request may be received, and the image sent to the server, by a second vehicle different from the first vehicle; the second vehicle may have only a driving recorder and need not have the capability of reporting a traffic accident.
Through the embodiments above, the traffic accident recognition scheme of the present disclosure can automatically determine whether a traffic accident reported by a user is real and valid, so that a traffic accident report can be published accurately and in a timely manner, improving the traffic environment. Compared with the traditional traffic accident reporting and auditing mechanism, first, the scheme does not require users to report traffic accidents manually; instead, it automatically judges whether a traffic accident exists and reports it using images captured in real time by the driving recorders of the massive number of vehicles on the road, which enlarges the coverage of traffic accident identification. In addition, a reported traffic accident only needs to indicate its location and time, without uploading images, thereby saving bandwidth resources. Furthermore, after a reported traffic accident is received, the remaining operations can be performed through an automated process, which avoids the high cost and high latency of manual auditing, reduces the possibility of missed or erroneous audits, and significantly improves the user experience. More importantly, by collecting images of the traffic accident from a plurality of image recording devices that passed the accident scene, a more comprehensive image set can be obtained, thereby improving the accuracy of the recognition result.
Fig. 8 shows a block diagram of a traffic accident recognition device 800 according to an embodiment of the present disclosure. As shown in fig. 8, the traffic accident recognition apparatus 800 may include: a time and location determination module 802 configured to determine a time and a location of a traffic accident based on an accident indication message reported by a vehicle; an image recording device determination module 804 configured to determine an image recording device based on time and location; an image capture instruction sending module 806 configured to send an image capture instruction to the image recording device to cause the image recording device to return an image associated with a time; and a recognition result determination module 808 configured to determine a recognition result of the traffic accident based on the image.
In some embodiments, the time and location determination module 802 may include: a time and location information acquisition module configured to acquire time information and location information from the accident indication message; an aggregation module configured to aggregate the accident indication message with other accident indication messages to determine the traffic accident based on the time information and the location information and the corresponding time information and location information in the other accident indication messages; and a determination module configured to determine the time and location of the traffic accident based on the time information and the location information.
In some embodiments, the traffic accident recognition device 800 may further include: a comparison module configured to compare the time and location with corresponding times and corresponding locations of historical traffic incidents; a determination module configured to determine the traffic accident as the traffic accident to be identified if at least one of the time and the location is different from a corresponding time and a corresponding location of the historical traffic accident.
In some embodiments, the image recording device may be an image recording device of an additional vehicle, and the image recording device determination module 804 may include: a time period and road segment determination module configured to determine a time period associated with the time and a road segment associated with the location; and an image recording device determination module configured to determine the image recording device of the additional vehicle located on the road segment during the time period.
In some embodiments, the recognition result determination module 808 may include: a feature representation determination module configured to determine a feature representation of the image; and an application module configured to apply the feature representations to a traffic accident recognition model to determine a recognition result of the traffic accident, the traffic accident recognition model being trained by taking the reference feature representations as input and the corresponding annotated reference recognition results as output.
The present disclosure also provides an electronic device, a computer-readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 9 illustrates a block diagram of a computing device 900 in which one or more embodiments of the disclosure may be implemented. Computing device 900 is an example implementation of the vehicle 110 or the server 120 shown in fig. 1. It should be understood that the computing device 900 illustrated in FIG. 9 is merely exemplary and should not be construed as limiting in any way the functionality and scope of the embodiments described herein.
As shown in fig. 9, computing device 900 is in the form of a general purpose computing device. Components of computing device 900 may include, but are not limited to, one or more processors or processing units 910, memory 920, storage 930, one or more communication units 940, one or more input devices 950, and one or more output devices 960. The processing unit 910 may be a real or virtual processor and can perform various processes according to programs stored in the memory 920. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of computing device 900.
Computing device 900 typically includes a number of computer storage media. Such media may be any available media that is accessible by computing device 900 and includes, but is not limited to, volatile and non-volatile media, removable and non-removable media. The memory 920 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. Storage 930 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data (e.g., training data for training) and that may be accessed within computing device 900.
Computing device 900 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 9, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 920 may include a computer program product 925 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 940 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of computing device 900 may be implemented in a single computing cluster or multiple computing machines, which are capable of communicating over a communications connection. Thus, computing device 900 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 950 may be one or more input devices such as a mouse, keyboard, or trackball. The output device 960 may be one or more output devices such as a display, speakers, or printer. Through the communication unit 940, computing device 900 may also communicate as needed with one or more external devices (not shown) such as storage devices, display devices, and sensors, with one or more devices that enable a user to interact with computing device 900, or with any devices (e.g., network cards, modems) that enable computing device 900 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which one or more computer instructions are stored, wherein the one or more computer instructions are executed by a processor to implement the above-described method.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.
The embodiment of the application discloses:
TS 1: a traffic accident identification method, comprising:
determining the time and the position of the traffic accident based on accident indication information reported by vehicles;
determining an image recording device based on the time and the location;
sending an image acquisition instruction to the image recording device to enable the image recording device to return an image associated with the time; and
determining a recognition result of the traffic accident based on the image.
TS2. The method of TS1, wherein determining the time and the location comprises:
acquiring time information and position information from the accident indication message;
aggregating the accident indication message and the other accident indication messages to determine the traffic accident based on the time information and the location information and corresponding time information and corresponding location information in the other accident indication messages; and
determining the time and the location of the traffic accident based on the time information and the location information.
TS3. The method of TS1, further comprising:
comparing the time and the location to corresponding times and corresponding locations of historical traffic incidents; and
determining the traffic accident as a traffic accident to be identified if at least one of the time and the location is different from the corresponding time and the corresponding location of the historical traffic accident.
TS4. The method of TS1, wherein the image recording device includes at least an image recording device of an additional vehicle, and determining the image recording device based on the time and the location comprises:
determining a time period associated with the time and a road segment associated with the location; and
determining the image recording device of the additional vehicle located on the road segment during the time period.
TS5. The method of TS1, wherein determining the recognition result of the traffic accident based on the image comprises:
identifying the image; and
determining the traffic accident as a real traffic accident in accordance with a determination that the image includes the target object associated with the traffic accident.
TS6. The method of TS5, wherein the target object includes at least one of a warning sign, a cone, a pothole, and car body debris.
TS7. The method of TS1, wherein determining the recognition result of the traffic accident based on the image comprises:
determining a feature representation of the image; and
applying the feature representation to a traffic accident recognition model trained using a reference feature representation as an input and a corresponding labeled reference recognition result as an output to determine a recognition result of the traffic accident.
TS8. A traffic accident recognition method, comprising:
detecting a target object in the traffic accident;
generating an accident indication message for reporting the traffic accident based on time information and location information associated with the target object; and
sending the image in accordance with a determination that a request for the image in which the target object is located is received.
TS9. A traffic accident recognition device, comprising:
the time and position determining module is configured to determine the time and the position of the traffic accident based on the accident indication message reported by the vehicle;
an image recording device determination module configured to determine an image recording device based on the time and the location;
an image acquisition instruction sending module configured to send an image acquisition instruction to the image recording device to cause the image recording device to return an image associated with the time; and
an identification result determination module configured to determine an identification result of the traffic accident based on the image.
TS10. The traffic accident recognition device of TS9, wherein the time and location determination module comprises:
a time and position information acquisition module configured to acquire time information and location information from the accident indication message;
an aggregation module configured to aggregate the accident indication message with other accident indication messages to determine the traffic accident, based on the time information and the location information and on corresponding time information and corresponding location information in the other accident indication messages; and
a determination module configured to determine a time and a location of the traffic accident based on the time information and the location information.
Ts11. the traffic accident recognition device according to TS9, further comprising:
a comparison module configured to compare the time and location with corresponding times and corresponding locations of historical traffic incidents; and
a determination module configured to determine the traffic accident as the traffic accident to be identified if at least one of the time and the location is different from a corresponding time and a corresponding location of the historical traffic accident.
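The comparison in TS11 amounts to deduplicating a candidate against historical traffic accidents: only a candidate that matches no historical accident in both time and location is treated as a traffic accident to be identified. A minimal sketch, with the time window and distance radius as illustrative assumptions:

```python
def is_new_accident(candidate, history, time_window_s=1800, radius_m=200):
    """Return True if no historical accident matches the candidate in BOTH
    time and (approximate) location.

    candidate and history entries: (unix_time, lat, lon). Distance uses a
    flat-earth approximation (~111 km per degree), adequate at city scale.
    """
    for t, lat, lon in history:
        close_in_time = abs(candidate[0] - t) <= time_window_s
        deg = ((candidate[1] - lat) ** 2 + (candidate[2] - lon) ** 2) ** 0.5
        close_in_space = deg * 111_000 <= radius_m
        if close_in_time and close_in_space:
            return False  # duplicate of a known historical accident
    return True

history = [(1000, 39.9000, 116.4000)]
print(is_new_accident((1100, 39.9001, 116.4001), history))  # -> False
print(is_new_accident((9000, 39.9001, 116.4001), history))  # -> True
```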
Ts12. the traffic accident recognition device according to TS9, wherein the image recording device is an image recording device of an additional vehicle, the image recording device determination module comprising:
a time segment road segment determination module configured to determine a time segment associated with a time and a road segment associated with a location; and
an image recording device determination module configured to determine the image recording device of the additional vehicle located at the road segment within the time period.
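A sketch of the device selection in TS12, assuming map-matched trace records for additional vehicles; the record layout and the size of the time period are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class TraceRecord:
    device_id: str     # image recording device of an additional vehicle
    road_segment: str  # map-matched road segment the vehicle was on
    time: int          # unix timestamp of the trace point

def find_recording_devices(traces, accident_time, accident_segment, window_s=300):
    """Select recorders of additional vehicles that were on the accident's
    road segment within the time period around the accident time."""
    lo, hi = accident_time - window_s, accident_time + window_s
    return sorted({
        r.device_id
        for r in traces
        if r.road_segment == accident_segment and lo <= r.time <= hi
    })

traces = [
    TraceRecord("cam-1", "seg-42", 1005),
    TraceRecord("cam-2", "seg-42", 2000),  # outside the time period
    TraceRecord("cam-3", "seg-07", 1010),  # wrong road segment
]
print(find_recording_devices(traces, 1000, "seg-42"))  # -> ['cam-1']
```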
Ts13. the traffic accident recognition device according to TS9, wherein the recognition result determination module comprises:
an image recognition module configured to recognize the image; and
a real traffic accident determination module configured to determine the traffic accident to be a real traffic accident in response to determining that a target object associated with the traffic accident is included in the image.
Ts14. the traffic accident recognition device of TS13, wherein the target object includes at least one of a warning sign, a cone, a pothole, and a car body fragment.
Ts15. the traffic accident recognition device according to TS9, wherein the recognition result determination module includes:
a feature representation determination module configured to determine a feature representation of the image; and
an application module configured to apply the feature representations to a traffic accident recognition model trained by taking as input the reference feature representations and as output the corresponding annotated reference recognition results to determine recognition results of the traffic accident.
Ts16. an electronic device, comprising:
a memory and a processor;
wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method according to any one of TS1 to TS8.
Ts17. a computer readable storage medium having stored thereon one or more computer instructions, wherein the one or more computer instructions are executed by a processor to implement the method according to any one of TS1 to TS8.
Ts18. a computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement a method according to any one of TS1 to TS8.

Claims (10)

1. A traffic accident identification method, comprising:
determining the time and the location of a traffic accident based on an accident indication message reported by a vehicle;
determining an image recording device based on the time and the location;
sending an image acquisition instruction to the image recording device to enable the image recording device to return an image associated with the time; and
determining a recognition result of the traffic accident based on the image.
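The four steps of claim 1 compose into a simple server-side pipeline. The sketch below stubs out the device lookup, the image acquisition instruction, and the recognition model; all names and stub behaviors are assumptions for illustration, not the patented implementation.

```python
def identify_traffic_accident(messages, find_devices, fetch_image, recognize):
    """End-to-end sketch of claim 1.

    messages: accident indication messages, each {"time", "lat", "lon"}
    find_devices(time, lat, lon): recorders near the accident (stubbed)
    fetch_image(device, time): the image acquisition instruction (stubbed)
    recognize(image): True if a target object is present (stubbed)
    """
    # Step 1: time and location of the accident from the reported messages
    t = sum(m["time"] for m in messages) // len(messages)
    lat = sum(m["lat"] for m in messages) / len(messages)
    lon = sum(m["lon"] for m in messages) / len(messages)
    # Step 2: image recording devices based on the time and the location
    devices = find_devices(t, lat, lon)
    # Steps 3-4: request images associated with the time and recognize them
    for dev in devices:
        image = fetch_image(dev, t)
        if image is not None and recognize(image):
            return "real_accident"
    return "unconfirmed"

result = identify_traffic_accident(
    [{"time": 100, "lat": 39.9, "lon": 116.4}],
    find_devices=lambda t, la, lo: ["cam-1"],
    fetch_image=lambda dev, t: "frame-bytes",
    recognize=lambda img: True,  # stub: image contains a target object
)
print(result)  # -> real_accident
```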
2. The method of claim 1, wherein determining the time and the location comprises:
acquiring time information and position information from the accident indication message;
aggregating the accident indication message with other accident indication messages to determine the traffic accident, based on the time information and the location information and on corresponding time information and corresponding location information in the other accident indication messages; and
determining the time and the location of the traffic accident based on the time information and the location information.
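The aggregation of claim 2 can be sketched as greedy grouping of messages that are mutually close in time and location; the thresholds and field names are illustrative assumptions.

```python
def aggregate_messages(messages, time_window_s=600, max_dist_deg=0.002):
    """Merge accident indication messages whose times and locations are
    close into one traffic accident; returns a list of message groups."""
    accidents = []  # each entry: list of messages for one accident
    for msg in sorted(messages, key=lambda m: m["time"]):
        for group in accidents:
            head = group[0]
            if (abs(msg["time"] - head["time"]) <= time_window_s
                    and abs(msg["lat"] - head["lat"]) <= max_dist_deg
                    and abs(msg["lon"] - head["lon"]) <= max_dist_deg):
                group.append(msg)
                break
        else:
            accidents.append([msg])
    return accidents

msgs = [
    {"time": 100, "lat": 39.9000, "lon": 116.4000},
    {"time": 160, "lat": 39.9001, "lon": 116.4001},  # same accident
    {"time": 100, "lat": 39.9500, "lon": 116.4500},  # different place
]
print(len(aggregate_messages(msgs)))  # -> 2
```

The time and location of each resulting accident can then be taken from the grouped messages, e.g. as their mean.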
3. The method of claim 1, further comprising:
comparing the time and the location to corresponding times and corresponding locations of historical traffic accidents; and
determining the traffic accident as a traffic accident to be identified if at least one of the time and the location differs from the corresponding time and the corresponding location of a historical traffic accident.
4. The method of claim 1, wherein the image recording device comprises at least an image recording device of an additional vehicle, the determining the image recording device based on the time and the location comprising:
determining a time period associated with the time and a road segment associated with the location; and
determining the image recording device of the additional vehicle located on the road segment during the time period.
5. The method of claim 1, wherein determining the identification of the traffic accident based on the image comprises:
identifying the image; and
determining the traffic accident to be a real traffic accident in response to determining that a target object associated with the traffic accident is included in the image.
6. A traffic accident identification method, comprising:
detecting a target object in the traffic accident;
generating an accident indication message for reporting, based on time information and location information associated with the target object; and
sending an image of the target object in response to receiving a request for the image.
7. A traffic accident recognition apparatus, comprising:
the time and position determining module is configured to determine the time and the position of the traffic accident based on the accident indication message reported by the vehicle;
an image recording device determination module configured to determine an image recording device based on the time and the location;
an image acquisition instruction sending module configured to send an image acquisition instruction to the image recording device to cause the image recording device to return an image associated with the time; and
an identification result determination module configured to determine an identification result of the traffic accident based on the image.
8. An electronic device, comprising:
a memory and a processor;
wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions are to be executed by the processor to implement the method of any one of claims 1 to 6.
9. A computer readable storage medium having one or more computer instructions stored thereon, wherein the one or more computer instructions are executed by a processor to implement the method of any one of claims 1 to 6.
10. A computer program product comprising computer executable instructions, wherein the computer executable instructions, when executed by a processor, implement the method of any one of claims 1 to 6.
CN202110089174.XA 2021-01-22 2021-01-22 Traffic accident recognition method, device, electronic device and medium Pending CN112926575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110089174.XA CN112926575A (en) 2021-01-22 2021-01-22 Traffic accident recognition method, device, electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110089174.XA CN112926575A (en) 2021-01-22 2021-01-22 Traffic accident recognition method, device, electronic device and medium

Publications (1)

Publication Number Publication Date
CN112926575A true CN112926575A (en) 2021-06-08

Family

ID=76164829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110089174.XA Pending CN112926575A (en) 2021-01-22 2021-01-22 Traffic accident recognition method, device, electronic device and medium

Country Status (1)

Country Link
CN (1) CN112926575A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980855A (en) * 2017-04-01 2017-07-25 公安部交通管理科学研究所 Traffic sign quickly recognizes alignment system and method
CN108307315A (en) * 2016-09-07 2018-07-20 北京嘀嘀无限科技发展有限公司 A kind of processing method of traffic accident, server and mobile terminal
US20180365983A1 (en) * 2015-12-10 2018-12-20 Telefonaktiebolaget Lm Ericsson (Publ) Technique for collecting information related to traffic accidents
CN109389827A (en) * 2018-08-17 2019-02-26 深圳壹账通智能科技有限公司 The means of proof, device, equipment and storage medium based on automobile data recorder
US20200027333A1 (en) * 2018-07-17 2020-01-23 Denso International America, Inc. Automatic Traffic Incident Detection And Reporting System


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113645440A (en) * 2021-06-23 2021-11-12 东风汽车集团股份有限公司 Automobile network alarm method and system
CN113450474A (en) * 2021-06-28 2021-09-28 通视(天津)信息技术有限公司 Driving video data processing method and device and electronic equipment
CN113807220A (en) * 2021-09-06 2021-12-17 丰图科技(深圳)有限公司 Traffic event detection method and device, electronic equipment and readable storage medium
CN114301938A (en) * 2021-12-24 2022-04-08 阿波罗智联(北京)科技有限公司 Vehicle-road cooperative vehicle event determination method, related device and computer program product
CN114301938B (en) * 2021-12-24 2024-01-02 阿波罗智联(北京)科技有限公司 Vehicle-road cooperative vehicle event determining method, related device and computer program product

Similar Documents

Publication Publication Date Title
CN112926575A (en) Traffic accident recognition method, device, electronic device and medium
US10317901B2 (en) Low-level sensor fusion
US9443153B1 (en) Automatic labeling and learning of driver yield intention
CN111739344B (en) Early warning method and device and electronic equipment
CN110753892A (en) Method and system for instant object tagging via cross-modality verification in autonomous vehicles
WO2018047114A2 (en) Situational awareness determination based on an annotated environmental model
JP2016095831A (en) Driving support system and center
CN107767661B (en) Real-time tracking system for vehicle
CN110753953A (en) Method and system for object-centric stereo vision in autonomous vehicles via cross-modality verification
CN112203216B (en) Positioning information acquisition method, driving assistance method and vehicle end sensor detection method
CN109284801B (en) Traffic indicator lamp state identification method and device, electronic equipment and storage medium
US11361555B2 (en) Road environment monitoring device, road environment monitoring system, and road environment monitoring program
CN108932849B (en) Method and device for recording low-speed running illegal behaviors of multiple motor vehicles
US11189162B2 (en) Information processing system, program, and information processing method
CN113771573A (en) Vehicle suspension control method and device based on road surface identification information
CN112766746A (en) Traffic accident recognition method and device, electronic equipment and storage medium
CN113220805B (en) Map generation device, recording medium, and map generation method
EP3859281B1 (en) Apparatus and method for collecting data for map generation
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN114264310A (en) Positioning and navigation method, device, electronic equipment and computer storage medium
CN113393011A (en) Method, apparatus, computer device and medium for predicting speed limit information
KR102559928B1 (en) Road Map Information Currentization Method Using Road Shooting Information, Management Server Used Therein, and Medium Being Recorded with Program for Executing the Method
WO2016072082A1 (en) Driving assistance system and center
EP4358039A1 (en) Lane-assignment for traffic objects on a road
US20230110089A1 (en) Information collection device, roadside device, and road condition obtaining method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination