CN111753634A - Traffic incident detection method and device - Google Patents

Traffic incident detection method and device

Info

Publication number
CN111753634A
CN111753634A (application CN202010238631.2A)
Authority
CN
China
Prior art keywords
lane
scene
road surface
road
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010238631.2A
Other languages
Chinese (zh)
Inventor
孔繁司
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd filed Critical Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN202010238631.2A priority Critical patent/CN111753634A/en
Publication of CN111753634A publication Critical patent/CN111753634A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G1/00: Traffic control systems for road vehicles
    • G08G1/01: Detecting movement of traffic to be counted or controlled
    • G08G1/0104: Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125: Traffic data processing
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Abstract

The application provides a traffic event detection method and device. Even if the road surface scene in a monitored area changes, corresponding road traffic scene information is determined from at least one captured image frame, and at least one target event detection mechanism corresponding to that scene information is determined from all configured event detection mechanisms. The target event detection mechanism is thus adjusted adaptively and dynamically as the actual road surface in the monitored area changes, with no need to manually reconfigure the event detection mechanism. This effectively saves manpower, material resources, and cost, and also improves the efficiency of adjusting the event detection mechanism.

Description

Traffic incident detection method and device
Technical Field
The present application relates to image technology, and more particularly, to a method and apparatus for detecting traffic events.
Background
In road surface monitoring, video image monitoring devices such as cameras are often installed at different monitoring points so that traffic events (such as occupying a passing lane or reversing) can be detected by these devices.
At present, a video image monitoring device installed at a monitoring point is configured in advance with a traffic event detection mechanism that matches the actual road scene (also called the monitored area). For example, a corresponding traffic event detection mechanism is configured in the device according to the lanes, lane directions, and so on in the monitored area. However, once the road scene in the monitored area changes due to some special situation, the configured traffic event detection mechanism no longer matches the scene, and a mechanism corresponding to the changed scene must be configured in the device again. With the wide deployment of video image monitoring devices, especially on expressways, where one device is often installed per kilometer, reconfiguring the devices whenever the actual road surface scene in a monitored area changes consumes a large amount of manpower and material resources and is very inefficient.
Disclosure of Invention
The application provides a traffic incident detection method and equipment, which are used for adaptively and dynamically adjusting a traffic incident detection mechanism based on a scene.
The technical scheme provided by the application comprises the following steps:
A traffic event detection method, applied to an electronic device, includes the following steps:
determining corresponding road traffic scene information according to at least one captured image frame;
determining at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms;
and detecting a corresponding traffic event according to the target event detection mechanism.
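The three steps above can be sketched as a small pipeline. The following Python sketch is illustrative only: the scene categories, mechanism names, and stub functions are hypothetical assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the claimed three-step flow; all names are illustrative.

def determine_scene_info(frames):
    # Step 1: derive road traffic scene information from at least one frame.
    # A real system would run a scene segmentation model here; this stub
    # simply pretends the frames show a motor lane with a known direction.
    return {"motor_lane", "lane_direction"}

# Step 2 relies on a table of pre-configured event detection mechanisms.
CONFIGURED_MECHANISMS = {
    "motor_lane": "detect_illegal_parking_and_pedestrians",
    "lane_direction": "detect_wrong_way_and_reversing",
    "passing_lane": "detect_passing_lane_occupation",
}

def select_target_mechanisms(scene_info):
    # Step 2: keep only the mechanisms matching the current scene.
    return [CONFIGURED_MECHANISMS[s] for s in sorted(scene_info)
            if s in CONFIGURED_MECHANISMS]

def detect_events(frames):
    # Step 3: start the selected mechanisms (names are returned for brevity).
    return select_target_mechanisms(determine_scene_info(frames))

print(detect_events([object()]))
```

Only the mechanisms matching the scene are started, which is what allows the set of active detectors to track scene changes without manual reconfiguration.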
As an embodiment, determining the corresponding road traffic scene information according to the at least one captured image frame includes:
inputting the currently captured image frame into a trained scene segmentation model to obtain a target road surface scene classification, where the target road surface scene classification includes different categories of road surface scenes, the different categories at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greenery, and other specified scene information on the road surface;
and determining the road traffic scene information according to the target road surface scene classification.
As an embodiment, determining the corresponding road traffic scene information according to the at least one captured image frame includes:
sequentially inputting N captured image frames into a trained scene segmentation model to obtain candidate road surface scene classifications corresponding to the N image frames respectively, where N is greater than 1 and the N image frames include the currently captured image frame and N-1 previously captured image frames;
determining a target road surface scene classification according to the candidate road surface scene classifications corresponding to the N image frames respectively, where the target road surface scene classification includes different categories of road surface scenes, the different categories at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greenery, and other specified scene information;
and determining the road traffic scene information according to the target road surface scene classification.
As an embodiment, each candidate road surface scene classification includes different categories of candidate road surface scenes, the different categories at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greenery, and other specified scene information.
Determining the target road surface scene classification according to the candidate road surface scene classifications corresponding to the N image frames includes:
selecting candidate road surface scenes belonging to the same category from the candidate road surface scene classifications corresponding to the N image frames respectively;
and generating the target road surface scene of that category according to the selected candidate road surface scenes belonging to the same category.
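One plausible way to realize the per-category merge described above is a simple vote across the N frames: a category enters the target classification only if enough frames agree on it. The threshold and the voting scheme below are assumptions for illustration; the text does not specify how candidates of the same category are combined.

```python
from collections import Counter

def merge_candidate_scenes(per_frame_candidates, min_votes=2):
    # per_frame_candidates: one set of candidate scene categories per frame.
    # Categories seen in at least `min_votes` frames survive into the target
    # road surface scene classification; one-off detections are dropped as noise.
    votes = Counter()
    for candidates in per_frame_candidates:
        votes.update(candidates)
    return {cat for cat, n in votes.items() if n >= min_votes}

frames = [{"zebra_crossing", "white_lane_line"},
          {"zebra_crossing"},
          {"zebra_crossing", "guardrail"}]
print(merge_candidate_scenes(frames))  # {'zebra_crossing'}
```

Using several frames this way makes the classification robust against a single mis-segmented frame, which matches the stated motivation for processing N frames instead of one.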
As an embodiment, determining the road traffic scene information according to the target road surface scene classification includes:
determining, from the target road surface scene classification, scene information of road surface scenes satisfying a set condition, where a road surface scene satisfying the set condition is one arranged as a planar structure on the road surface, such scenes at least including: a diversion area, greenery, a guardrail, a zebra crossing, and a target road surface on which no traffic sign is set;
determining lane line information of different lane lines from the target road surface scene classification;
determining lane information according to the lane line information of a lane line and the determined scene information of the target road surface adjacent to that lane line, and determining lane direction information of the lane corresponding to the lane information;
determining road direction signs from the target road surface scene classification, and determining corresponding road direction information according to the road direction signs;
and determining the road traffic scene information according to the scene information of the road surface scenes satisfying the set condition, the lane line information, the lane direction information, and the road direction information.
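The lane-derivation step can be pictured with simplified geometry: a lane is a strip between adjacent lane lines, except where the strip is actually covered by a planar scene such as a diversion area or greenery. The coordinates and the midpoint test below are illustrative assumptions, not the patent's method.

```python
def build_lanes(lane_line_xs, planar_regions):
    # lane_line_xs: x-positions of detected lane lines, ordered left to right.
    # planar_regions: (lo, hi) x-ranges covered by planar scenes
    # (diversion area, greenery, guardrail, zebra crossing, ...).
    lanes = []
    for left, right in zip(lane_line_xs, lane_line_xs[1:]):
        mid = (left + right) / 2
        # A strip whose midpoint falls inside a planar region is not a lane.
        if not any(lo <= mid <= hi for lo, hi in planar_regions):
            lanes.append((left, right))
    return lanes

# Three lane lines bound two strips; the right strip is a diversion area.
print(build_lanes([0, 100, 200], planar_regions=[(140, 210)]))  # [(0, 100)]
```

This is why the embodiment combines lane line information with the adjacent planar-scene information: the lines alone cannot distinguish a drivable lane from a painted diversion zone.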
As an embodiment, determining the lane direction information of the lane corresponding to the lane information includes:
acquiring the motion track of a tracked target object on the lane corresponding to the lane information;
determining the lane type of the lane according to the target object and its motion track;
and, when the lane type does not indicate a non-motor-vehicle lane, determining the lane direction information of the lane according to the motion track.
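A minimal sketch of the track-based direction step, under the assumption that vertical displacement in image space distinguishes the two directions of travel; the direction labels and the displacement test are hypothetical.

```python
def lane_direction(track, lane_type):
    # track: (x, y) positions of one tracked object over time, oldest first.
    # Per the embodiment, no direction is derived for non-motor-vehicle lanes.
    if lane_type == "non_motor":
        return None
    dy = track[-1][1] - track[0][1]
    # Objects moving down the image are assumed here to approach the camera.
    return "toward_camera" if dy > 0 else "away_from_camera"

print(lane_direction([(100, 50), (102, 120), (105, 300)], "motor"))
print(lane_direction([(100, 50), (102, 120)], "non_motor"))
```

In practice the track would come from an object tracker running on the same video stream; here it is supplied as a plain list of points.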
As an embodiment, determining at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms includes:
when a target road surface exists in the road traffic scene information and no traffic sign is set on the target road surface, determining the event detection mechanism corresponding to the target road surface as a target event detection mechanism, where this mechanism is at least used for detecting roadblocks, construction, and smoke and fire; and/or
when a non-motor-vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the non-motor-vehicle lane as a target event detection mechanism, where this mechanism is at least used for detecting that the non-motor-vehicle lane is occupied by a motor vehicle; and/or
when a lane line exists in the road traffic scene information and the lane line indicates that crossing it is forbidden, determining the event detection mechanism corresponding to the lane line as a target event detection mechanism, where this mechanism is at least used for detecting line pressing and lane changing; and/or
when a motor-vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the motor-vehicle lane as a target event detection mechanism, where this mechanism is at least used for detecting parking violations and pedestrians; and/or
when the road traffic scene information contains lane direction information, determining the event detection mechanism corresponding to the lane direction information as a target event detection mechanism, where this mechanism is at least used for detecting wrong-way driving and reversing; and/or
when a passing lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the passing lane as a target event detection mechanism, where this mechanism is at least used for detecting that the passing lane is occupied.
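The "and/or" branches above amount to a lookup from scene elements to sets of detection mechanisms. A table-driven sketch, with illustrative element and mechanism names:

```python
# Scene element -> event detection mechanisms, mirroring the branches above.
SCENE_TO_MECHANISMS = {
    "unmarked_target_road": ["roadblock", "construction", "smoke_fire"],
    "non_motor_lane":       ["occupied_by_motor_vehicle"],
    "no_crossing_line":     ["line_pressing", "lane_change"],
    "motor_lane":           ["illegal_parking", "pedestrian"],
    "lane_direction":       ["wrong_way_driving", "reversing"],
    "passing_lane":         ["passing_lane_occupied"],
}

def target_mechanisms(scene_info):
    # Union of the mechanisms for every element present in the scene info;
    # elements without a configured mechanism are simply skipped.
    out = []
    for element in scene_info:
        out.extend(SCENE_TO_MECHANISMS.get(element, []))
    return out

print(sorted(target_mechanisms(["motor_lane", "lane_direction"])))
# ['illegal_parking', 'pedestrian', 'reversing', 'wrong_way_driving']
```

Because the branches are joined by "and/or", several of them can fire at once, and the result is the union of the corresponding mechanisms, as the table lookup shows.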
As an embodiment, after detecting the corresponding traffic event according to the target event detection mechanism, the method further includes:
controlling a camera device to capture the traffic event.
A traffic event detection device, applied to an electronic device, includes:
a scene determining unit, configured to determine corresponding road traffic scene information according to at least one captured image frame;
a detection mechanism determining unit, configured to determine at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms;
and an event detection unit, configured to detect a corresponding traffic event according to the target event detection mechanism.
As an embodiment, the scene determining unit may determine the corresponding road traffic scene information according to the at least one captured image frame by: inputting the currently captured image frame into a trained scene segmentation model to obtain a target road surface scene classification; and determining the road traffic scene information according to the target road surface scene classification. The target road surface scene classification includes different categories of road surface scenes, at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greenery, and other designated scene information.
As an embodiment, the scene determining unit determines the corresponding road traffic scene information according to the at least one captured image frame by: sequentially inputting N captured image frames into the trained scene segmentation model to obtain candidate road surface scene classifications corresponding to the N image frames respectively, and determining a target road surface scene classification according to these candidate classifications; and determining the road traffic scene information according to the target road surface scene classification. Optionally, N is greater than 1, and the N image frames include the currently captured image frame and N-1 previously captured image frames. The target road surface scene classification includes different categories of road surface scenes, at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greenery, and other designated scene information.
In one example, each candidate road surface scene classification includes different categories of candidate road surface scenes, at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greenery, and other designated scene information.
The scene determining unit determines the target road surface scene classification according to the candidate road surface scene classifications corresponding to the N image frames by: selecting candidate road surface scenes belonging to the same category from the candidate classifications corresponding to the N image frames respectively; and generating the target road surface scene of that category according to the selected candidate road surface scenes belonging to the same category.
As an embodiment, the scene determining unit determines the road traffic scene information according to the target road surface scene classification by: determining, from the target road surface scene classification, scene information of road surface scenes satisfying a set condition, where a road surface scene satisfying the set condition is one arranged as a planar structure on the road surface, such scenes at least including: a diversion area, greenery, a guardrail, a zebra crossing, and a target road surface on which no traffic sign is set; determining lane line information of different lane lines from the target road surface scene classification; determining lane information according to the lane line information of a lane line and the determined scene information of the target road surface adjacent to that lane line, and determining lane direction information of the lane corresponding to the lane information; determining road direction signs from the target road surface scene classification, and determining corresponding road direction information according to the road direction signs; and determining the road traffic scene information according to the scene information of the road surface scenes satisfying the set condition, the lane line information, the lane direction information, and the road direction information.
As an embodiment, the scene determining unit determines the lane direction information of the lane corresponding to the lane information by: acquiring the motion track of a tracked target object on the lane corresponding to the lane information; determining the lane type of the lane according to the target object and its motion track; and, when the lane type does not indicate a non-motor-vehicle lane, determining the lane direction information of the lane according to the motion track.
As an embodiment, the detection mechanism determining unit determines at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms by:
when a target road surface exists in the road traffic scene information and no traffic sign is set on the target road surface, determining the event detection mechanism corresponding to the target road surface as a target event detection mechanism, where this mechanism is at least used for detecting roadblocks, construction, and smoke and fire; and/or
when a non-motor-vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the non-motor-vehicle lane as a target event detection mechanism, where this mechanism is at least used for detecting that the non-motor-vehicle lane is occupied by a motor vehicle; and/or
when a lane line exists in the road traffic scene information and the lane line indicates that crossing it is forbidden, determining the event detection mechanism corresponding to the lane line as a target event detection mechanism, where this mechanism is at least used for detecting line pressing and lane changing; and/or
when a motor-vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the motor-vehicle lane as a target event detection mechanism, where this mechanism is at least used for detecting parking violations and pedestrians; and/or
when the road traffic scene information contains lane direction information, determining the event detection mechanism corresponding to the lane direction information as a target event detection mechanism, where this mechanism is at least used for detecting wrong-way driving and reversing; and/or
when a passing lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the passing lane as a target event detection mechanism, where this mechanism is at least used for detecting that the passing lane is occupied.
As an embodiment, the event detection unit further controls a camera device to capture the traffic event after detecting the corresponding traffic event according to the target event detection mechanism.
An electronic device, comprising: a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions executable by the processor;
the processor is configured to execute machine-executable instructions to implement the method steps disclosed above.
According to the above technical solution, the corresponding road traffic scene information is determined according to at least one captured image frame, and at least one target event detection mechanism corresponding to that scene information is determined from all configured event detection mechanisms, so that the corresponding traffic event is detected based on the target event detection mechanism. In this way, the corresponding target event detection mechanism is adaptively and dynamically adapted based on the road traffic scene information.
Furthermore, even if the road surface scene in the monitored area changes, the corresponding road traffic scene information is determined according to the at least one captured image frame, and at least one target event detection mechanism corresponding to that scene information is determined from all configured event detection mechanisms. The target event detection mechanism is thus adjusted adaptively and dynamically as the actual road surface in the monitored area changes, with no need to manually reconfigure the event detection mechanism. This effectively saves manpower, material resources, and cost, and also improves the efficiency of adjusting the event detection mechanism.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart of a method provided by an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of step 101 provided in an embodiment of the present application;
fig. 3a is a schematic view of a road surface scene provided in the embodiment of the present application;
fig. 3b is a schematic view of another road surface scene provided in the embodiment of the present application;
FIG. 4 is a flowchart of another implementation of step 101 provided in an embodiment of the present application;
FIG. 5 is a flowchart of an implementation of step 402 provided by an embodiment of the present application;
FIG. 6 is a flowchart of an implementation of step 202 or step 403 provided by an embodiment of the present application;
FIG. 7a is a schematic view of a zebra crossing provided by an embodiment of the present application;
FIG. 7b is a schematic view of a vehicle lane provided by an embodiment of the present application;
FIG. 8 is a flowchart of an implementation of step 603 provided by an embodiment of the present application;
FIG. 9 is a schematic view of a road sign provided in an embodiment of the present application;
FIG. 10 is a block diagram of an apparatus according to an embodiment of the present disclosure;
fig. 11 is a hardware configuration diagram of a device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to make the technical solutions provided in the embodiments of the present application better understood and make the above objects, features and advantages of the embodiments of the present application more comprehensible, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart of a method provided in an embodiment of the present application. As an example, the method illustrated in fig. 1 may be applied to an electronic device. In one example, the electronic device may be a camera. In another example, it may also be a separately deployed device connected to a camera; the embodiment of the present application is not particularly limited in this respect.
As shown in fig. 1, the process may include the following steps:
step 101, determining corresponding road traffic scene information according to at least one collected frame of image.
As an embodiment, the at least one image frame may be a monitoring image captured of a monitored road. It should be noted that the monitored road may be a multi-lane bidirectional road (e.g., an expressway), a multi-lane unidirectional road, a single-lane bidirectional road, and so on; the embodiment is not particularly limited.
As an embodiment, step 101 determines the actual road traffic information of the monitored road (i.e., the above road traffic scene information) according to the at least one captured image frame. There are many ways to implement this determination; two of them are described by way of example below and are not detailed here.
And 102, determining at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms.
As an embodiment, various event detection mechanisms that may be used can be configured on the electronic device in advance. It should be noted that configuring the mechanisms in advance does not mean that the electronic device starts all of them during subsequent event detection; rather, as described in step 102, at least one target event detection mechanism corresponding to the road traffic scene information is determined at detection time. How step 102 determines the target event detection mechanisms from all configured event detection mechanisms is described in a specific embodiment below and is not detailed here.
And 103, detecting a corresponding traffic event according to the target event detection mechanism.
When at least one target event detection mechanism corresponding to the road traffic scene information has been determined from all configured event detection mechanisms in step 102, the target event detection mechanism can then be started, i.e., the corresponding traffic event is detected according to it. This adaptively adapts the corresponding target event detection mechanism based on the road traffic scene information and detects the corresponding traffic event based on that mechanism.
Thus, the flow shown in fig. 1 is completed.
As can be seen from the flow shown in fig. 1, in this embodiment the corresponding road traffic scene information is determined according to the at least one captured image frame, and at least one target event detection mechanism corresponding to that scene information is determined from all configured event detection mechanisms, so that the corresponding target event detection mechanism is adaptively and dynamically adapted based on the road traffic scene information. The corresponding traffic events may then be detected based on the target event detection mechanism.
Furthermore, even if the road surface scene in the monitored area changes, the corresponding road traffic scene information is determined according to the at least one captured image frame, and at least one target event detection mechanism corresponding to that scene information is determined from all configured event detection mechanisms. The target event detection mechanism is thus adjusted adaptively and dynamically as the actual road surface in the monitored area changes, with no need to manually reconfigure the event detection mechanism. This effectively saves manpower, material resources, and cost, and also improves the efficiency of adjusting the event detection mechanism.
How to determine the corresponding road traffic scene information according to the at least one collected frame of image in step 101 is described as follows:
as an embodiment, in step 101, determining corresponding road traffic scene information according to at least one collected frame of image may be implemented based on deep learning. Optionally, in this embodiment, a scene segmentation model may be trained based on a deep learning technique. The following describes how to train the scene segmentation model by way of example, and details are not repeated here.
Based on the scene segmentation model, as an embodiment, the determining the corresponding road traffic scene information according to the at least one collected frame image in step 101 may include the process shown in fig. 2:
Referring to fig. 2, fig. 2 is a flowchart of one implementation of step 101 provided in an embodiment of the present application. As shown in fig. 2, the process may include the following steps:
step 201, inputting a currently acquired current image frame to a trained scene segmentation model to obtain a target road scene classification.
For one embodiment, the target road surface scene classification may include different categories of road surface scenes. Optionally, the different categories of road surface scenes at least include: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greening, and other designated scene information (such as the background). Fig. 3a and 3b show road surface scenes of some of these categories as examples.
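To make the model's output concrete, the sketch below (an assumption, not the patent's implementation) maps a segmentation model's per-pixel class-ID output to the road surface scene categories listed above; the class IDs and the English names are hypothetical.

```python
# Hypothetical class-ID table for the 9 road surface scene categories.
SCENE_CLASSES = {
    0: "background",       # other designated scene information
    1: "target_road",      # road surface without any traffic marking
    2: "zebra_crossing",
    3: "white_lane_line",
    4: "yellow_lane_line",
    5: "road_direction_sign",
    6: "diversion_strip",
    7: "guardrail",
    8: "greening",
}

def classes_present(class_map):
    """Return the set of scene category names present in a per-pixel
    class-ID map (a list of rows of integer IDs), i.e. the categories
    the segmentation model found in one image frame."""
    ids = {pixel for row in class_map for pixel in row}
    return {SCENE_CLASSES[i] for i in ids if i in SCENE_CLASSES}
```

A real system would obtain `class_map` from the trained segmentation network's argmax output; here it is just a nested list.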
Step 202, determining the road traffic scene information according to the target road scene classification.
After the target road surface scene classification is obtained in step 201, in this step 202, the road traffic scene information can be determined according to the target road surface scene classification. As to how to determine the road traffic scene information according to the classification of the target road scene in step 202, there are many implementation forms in specific implementation, and fig. 6 below illustrates one embodiment, which is not described herein again.
Thus, the flow shown in fig. 2 is completed.
Through the process shown in fig. 2, the determination of the corresponding road traffic scene information based on the currently acquired current image frame by means of the scene segmentation model is realized.
As another embodiment, the determining the corresponding road traffic scene information according to the at least one collected image in step 101 may include the process shown in fig. 4:
referring to fig. 4, fig. 4 is a flowchart of another implementation of step 101 provided in the embodiments of the present application. As shown in fig. 4, the process may include the following steps:
step 401, sequentially inputting the acquired N frames of images to the trained scene segmentation model to obtain candidate road scene classifications corresponding to the N frames of images respectively.
Optionally, N is greater than 1.
As one example, the N image frames may include the currently acquired current image frame and N-1 previously acquired image frames. Optionally, the N-1 previously acquired image frames may be those whose acquisition times are closest to the current time among all previously acquired image frames.
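The sliding window of the current frame plus the N-1 most recently acquired frames can be sketched with a bounded queue; `N = 5` and the function name are illustrative assumptions, since the text only requires N > 1.

```python
from collections import deque

N = 5  # window size (assumed; the patent only requires N > 1)
recent_frames = deque(maxlen=N)  # holds the N most recently acquired frames

def on_frame(frame):
    """Push the current frame; the deque automatically discards the
    oldest frame, so the window always holds the current frame plus the
    N-1 previously acquired frames closest to the current time."""
    recent_frames.append(frame)
    return list(recent_frames)
```

Each call returns the frames that would be fed to the scene segmentation model in step 401.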
As described in step 401, when the acquired N image frames are sequentially input into the trained scene segmentation model, candidate road surface scene classifications corresponding to the N image frames are obtained. For one embodiment, the candidate road surface scene classification corresponding to each image frame is similar to the target road surface scene classification above and may include different categories of candidate road surface scenes. Optionally, the different categories of candidate road surface scenes at least include: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greening, and other designated scene information. Some of these road surface scenes are shown in fig. 3a and 3b.
Step 402, determining a target road surface scene classification according to the candidate road surface scene classifications corresponding to the N frames of images respectively.
The target road surface scene classification here is similar to that in step 201 above and is not described again.
As for how to determine the target road surface scene classification from the candidate road surface scene classifications corresponding to the N image frames in step 402, one embodiment follows the process shown in fig. 5, described below.
And 403, determining the road traffic scene information according to the target road scene classification.
Similar to step 202, in this step 403 the road traffic scene information is determined from the target road surface scene classification. There are many possible implementations; fig. 6 below illustrates one of them, which is not repeated here.
The flow shown in fig. 4 is completed.
Through the process shown in fig. 4, the determination of the corresponding road traffic scene information based on the N frames of images and by using the scene segmentation model can be realized.
How to determine the classification of the target road surface scene according to the classification of the candidate road surface scenes respectively corresponding to the N frames of images in the above step 402 is described as follows:
Referring to fig. 5, fig. 5 is a flowchart of one implementation of step 402 provided in an embodiment of the present application. As shown in fig. 5, the process may include the following steps:
step 501, selecting candidate road surface scenes belonging to the same category from the candidate road surface scene categories respectively corresponding to the N frames of images.
For example, a zebra crossing is selected from the candidate road surface scene classification corresponding to the first image frame, a zebra crossing is selected from the classification corresponding to the second image frame, and so on, yielding N zebra crossings (of course, if some images have no zebra crossing in their candidate classifications, only M zebra crossings are obtained, where M is smaller than N).
And 502, generating a target road surface scene corresponding to the category according to the selected candidate road surface scenes belonging to the same category.
It should be noted that, in this embodiment, each candidate road surface scene is represented in a picture format such as JPEG. Based on this, as an embodiment, generating in step 502 the target road surface scene corresponding to a category from the selected candidate road surface scenes belonging to that category may include: processing the pixels at the same pixel positions in the candidate road surface scenes of that category with a set algorithm, finally obtaining the target road surface scene corresponding to that category. The target road surface scene is likewise represented in a picture format such as JPEG.
Still taking the above N zebra crossings as an example, the N zebra crossings are in fact N zebra crossing pictures. Based on this, as an embodiment, in step 502 the pixels at the same pixel positions in the N zebra crossing pictures may be processed with the set algorithm to obtain one target zebra crossing.
It should be noted that the set algorithm may consist of integrating the pixels, for example by superimposing or weighting their intensities; this embodiment imposes no particular limitation.
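One plausible reading of the set algorithm (averaging the superimposed pixel intensities at each position and thresholding) can be sketched as follows; the 0.5 threshold and the representation of each candidate scene as a binary mask are assumptions.

```python
def integrate_masks(masks, threshold=0.5):
    """Combine same-category candidate scene masks (lists of rows of
    0/1 pixels, all the same size) into one target scene mask: average
    the pixel values at each position across the masks and keep the
    pixel if the average reaches the threshold."""
    rows, cols = len(masks[0]), len(masks[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            mean = sum(m[r][c] for m in masks) / len(masks)
            out[r][c] = 1 if mean >= threshold else 0
    return out
```

A pixel thus survives into the target scene only if it appears in enough of the candidate scenes, which suppresses per-frame segmentation noise.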
The flow shown in fig. 5 is completed.
How to determine the target road surface scene classification according to the candidate road surface scene classifications corresponding to the N frames of images in the step 402 is realized through the flow shown in fig. 5.
How to determine the road traffic scene information according to the target road scene classification in the above step 202 or step 403 is described as follows:
referring to fig. 6, fig. 6 is a flowchart illustrating implementation of step 202 or step 403 according to an embodiment of the present application. As shown in fig. 6, the process may include the following steps:
step 601, determining scene information of the road surface scene meeting the set conditions from the target road surface scene classification.
Optionally, a road surface scene satisfying the set condition is a scene that forms a planar structure on the road surface. As an embodiment, the scenes forming a planar structure on the road surface at least include: diversion strips, greening, guardrails, zebra crossings, and roads not provided with any traffic markings (referred to as target roads for ease of description). It should be noted that, since a zebra crossing is composed of thick white parallel solid lines, it is also considered a planar structure. Fig. 7a illustrates the structure of a zebra crossing.
It should be noted that, as described above, a road surface scene is represented in a picture format such as JPEG. Based on this, as an embodiment, determining in step 601 the scene information of a road surface scene satisfying the set condition may include: performing pixel integration on the road surface scene satisfying the set condition to obtain polygon information. The polygon information includes: the position information of each point on the polygon and the road surface scene information corresponding to the polygon.
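As a hedged illustration of turning an integrated planar scene into polygon information, the sketch below takes the convex hull (monotone chain) of the set pixels; a real implementation might trace the actual contour instead, and the function name is hypothetical.

```python
def mask_to_polygon(mask):
    """Approximate the polygon of a planar road surface scene: collect
    the (x, y) positions of set pixels in a 0/1 mask and return their
    convex hull via Andrew's monotone chain algorithm."""
    pts = sorted({(c, r) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v})
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): sign gives turn direction
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

The returned vertex list corresponds to "the position information of each point on the polygon"; the category label would accompany it as the scene information.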
Step 602, determining lane line information of different lane lines from the target road surface scene classification.
As an example, in step 602, different lane lines (e.g., white solid lines, dashed lines, single yellow lines, double yellow lines, lane lines of parking areas, etc.) may be determined from the target road surface scene classification according to the type of each lane line (the type may be a color, such as white or yellow) and its shape (such as solid, dashed, single, or double lines).
Based on the determined lane lines, the lane line information of the different lane lines may include the shape and position of each lane line. Optionally, the shape of a lane line may be represented in cubic spline form.
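As a simple stand-in for the cubic spline representation, the sketch below fits a single cubic y = a + b·x + c·x^2 + d·x^3 exactly through four sampled lane-line points; a production system would fit a piecewise spline through many points, so treat this as an illustration of the curve form only.

```python
def fit_cubic(points):
    """Fit y = a + b*x + c*x**2 + d*x**3 exactly through four sampled
    lane-line points by Gaussian elimination with partial pivoting on
    the 4x4 Vandermonde system."""
    A = [[1.0, x, x * x, x ** 3, float(y)] for x, y in points]  # augmented 4x5
    n = 4
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[pivot] = A[pivot], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    coeffs = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][c] * coeffs[c] for c in range(i + 1, n))
        coeffs[i] = (A[i][n] - s) / A[i][i]
    return coeffs  # [a, b, c, d]
```

Storing such coefficients per segment, together with the sample positions, gives a compact form of "shape and position of the lane line".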
Step 603, determining lane information according to the lane line information of the lane line and the determined scene information of the target road surface adjacent to the lane line, and determining lane direction information of a lane corresponding to the lane information.
As an example, step 603 may first determine, from the lane line information such as the position of a lane line, the scene information of the target road surface adjacent to that position (such as the polygon information above), and then determine the lane information from the lane line information and the scene information of the adjacent target road surface. The lane information here may be, for example, the motor vehicle lane information on an urban road. Fig. 7b shows the motor vehicle lane information by way of example.
As to how to determine the lane direction information of the lane corresponding to the lane information in step 603, the flow shown in fig. 8 will describe one embodiment thereof by way of example, which is not repeated herein.
Step 604, determining road direction signs from the target road surface scene classification, and determining corresponding road direction information according to the road direction signs.
The road direction sign indicates the road direction; fig. 9 illustrates several kinds of road direction signs. Since the sign indicates the direction of the road, once a road direction sign is determined in step 604, the corresponding road direction information follows directly from it.
Step 605, determining road traffic scene information according to the scene information of the road scene which is determined in step 601 and meets the setting conditions, the lane line information determined in step 602, the lane information and the lane direction information determined in step 603, and the road direction information determined in step 604.
As an example, a road traffic scene structure may be assembled from the scene information of the road surface scenes satisfying the set condition determined in step 601, the lane line information determined in step 602, the lane information and lane direction information determined in step 603, and the road direction information determined in step 604; the assembled road traffic scene structure is the above road traffic scene information. Each of these determined items may serve as a sub-structure of the road traffic scene structure.
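One possible layout of the road traffic scene structure assembled in step 605 is sketched below; the field names are assumptions, since the patent only requires that each determined item become a sub-structure of the whole.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RoadTrafficScene:
    """Illustrative road traffic scene structure: one sub-structure per
    item determined in steps 601-604."""
    planar_scenes: List[dict] = field(default_factory=list)   # step 601: polygon + category
    lane_lines: List[dict] = field(default_factory=list)      # step 602: shape + position
    lanes: List[dict] = field(default_factory=list)           # step 603: lane info + direction
    road_directions: List[str] = field(default_factory=list)  # step 604: from direction signs
```

`default_factory` ensures each instance gets its own fresh lists, so scenes built for different monitored areas do not share state.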
The flow shown in fig. 6 is completed.
Through the flow shown in fig. 6, the determination of the road traffic scene information according to the target road scene classification in the above steps 202 and 403 is implemented.
The following describes how to determine in step 603 the lane direction information of the lane corresponding to the lane information:
referring to fig. 8, fig. 8 is a flowchart for implementing step 603 provided in the embodiment of the present application. As shown in fig. 8, the process may include:
step 801, acquiring a motion track of the tracked target object on a lane corresponding to the lane information.
Optionally, the motion track of the target object on the lane corresponding to the lane information may be acquired using a detection and tracking technique similar to existing detection and tracking; this embodiment imposes no particular limitation.
Step 802, determining the lane type of the lane according to the target object and the motion track of the target object.
Optionally, the lane type here may be: a non-motorized lane, a passing lane, etc.
As an embodiment, when the target object is a pedestrian and its motion track indicates that it is walking on the lane, the lane type is determined to be a non-motorized lane.
When the target object is a vehicle and its motion track indicates that it is driving on the lane, step 802 further detects whether a reference object such as a pedestrian or a non-motor vehicle appears around the lane in the previously monitored images. If so, the lane type is determined to be a motor vehicle lane on a non-expressway road; otherwise, it is determined to be an expressway lane, such as a passing lane or a fast lane.
Step 803, when the lane type does not indicate a non-motorized lane, determining the lane direction information of the lane according to the motion track.
For example, if the target object is a vehicle and its motion track indicates that it is traveling from south to north, the lane direction information of the lane is determined to be from south to north.
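Steps 801 to 803 can be sketched as follows; the object-type labels, the nearby-reference rule, and the assumption that the trajectory y-coordinate grows northward are all illustrative, not the patent's definitions.

```python
def lane_type_and_direction(object_type, trajectory, reference_nearby):
    """Sketch of steps 802-803. `trajectory` is a time-ordered list of
    (x, y) points for the tracked object; `reference_nearby` says whether
    pedestrians or non-motor vehicles were seen around the lane."""
    if object_type == "pedestrian":
        # A pedestrian walking on the lane implies a non-motorized lane;
        # no direction is derived in that case (step 803's condition).
        return "non_motorized_lane", None
    # A vehicle with pedestrians/non-motor vehicles nearby suggests an
    # ordinary motor lane; otherwise an expressway lane.
    lane_type = "motor_lane" if reference_nearby else "expressway_lane"
    dy = trajectory[-1][1] - trajectory[0][1]  # assume y grows northward
    direction = "south_to_north" if dy > 0 else "north_to_south"
    return lane_type, direction
```

A real system would accumulate many trajectories before committing to a lane direction rather than trusting a single track.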
So far, how to determine the lane direction information of the lane corresponding to the lane information in step 603 can be realized through the flowchart shown in fig. 8.
The following describes determining in step 102 at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms:
As an embodiment, determining at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms may include:
when a target road surface exists in the road traffic scene information and no traffic marking is set on the target road surface, determining the event detection mechanism corresponding to the target road surface as the target event detection mechanism, where the event detection mechanism corresponding to the target road surface is at least used for detecting roadblocks, construction, smoke and fire; and/or,
when a non-motor vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the non-motor vehicle lane as the target event detection mechanism, where that event detection mechanism is at least used for detecting occupation of the non-motor vehicle lane by a motor vehicle; and/or,
when a lane line exists in the road traffic scene information and the lane line indicates that crossing is forbidden, determining the event detection mechanism corresponding to the lane line as the target event detection mechanism, where that event detection mechanism is at least used for detecting line pressing and lane changing; and/or,
when a motor vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the motor vehicle lane as the target event detection mechanism, where that event detection mechanism is at least used for detecting illegal parking and pedestrians; and/or,
when the road traffic scene information contains lane direction information, determining the event detection mechanism corresponding to the lane direction information as the target event detection mechanism, where that event detection mechanism is at least used for detecting wrong-way driving and reversing; and/or,
when a passing lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the passing lane as the target event detection mechanism, where that event detection mechanism is at least used for detecting occupation of the passing lane.
For ease of understanding, table 1 illustrates how at least one target event detection mechanism corresponding to the road traffic scene information may be determined from all configured event detection mechanisms:
Road traffic scene information | Target event detection mechanism
Target road surface | For detecting roadblock, construction, smoke and fire events
Non-motor vehicle lane | For detecting occupation of a non-motor vehicle lane by a motor vehicle
No-crossing lane lines such as white solid lines and double yellow lines | For detecting line pressing and lane changing
Motor vehicle lane | For detecting illegal parking and pedestrians
Lane direction | For detecting wrong-way driving and reversing
Passing lane | For detecting occupation of the passing lane
TABLE 1
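The mapping of Table 1 can be sketched as a lookup from scene elements to the detection mechanisms they enable; the element keys and mechanism names are illustrative labels, not the patent's identifiers.

```python
# Table 1 as a lookup: scene element -> detection mechanisms it enables.
EVENT_MECHANISMS = {
    "target_road":      ["roadblock", "construction", "smoke", "fire"],
    "non_motor_lane":   ["motor_vehicle_occupying_non_motor_lane"],
    "no_crossing_line": ["line_pressing", "illegal_lane_change"],
    "motor_lane":       ["illegal_parking", "pedestrian_on_lane"],
    "lane_direction":   ["wrong_way_driving", "reversing"],
    "passing_lane":     ["passing_lane_occupied"],
}

def target_mechanisms(scene_elements):
    """Return the list of target event detection mechanisms enabled by
    the scene elements present in the road traffic scene information."""
    result = []
    for element in scene_elements:
        result.extend(EVENT_MECHANISMS.get(element, []))
    return result
```

When the scene information changes, re-running this lookup yields the new set of target mechanisms, which is the adaptive adjustment described above.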
It should be noted that, the above is only an implementation manner of determining at least one target event detection mechanism corresponding to the road traffic scene information, and other manners may also be used to determine at least one target event detection mechanism corresponding to the road traffic scene information.
When the target event detection mechanism is determined, the corresponding traffic event can be detected according to it. When a traffic event is detected, the camera device can further be controlled to capture the traffic event, the captured image including vehicle information such as the license plate number. The traffic event is thus captured so that it can be kept as evidence of the violation.
The above-mentioned scene segmentation model is described below:
First, scene picture samples are collected and recorded into a scene picture sample set. To ensure that the trained scene segmentation model is accurate, the sample set may contain a large number of samples, for example more than 200,000 scene picture samples. It should be noted that the sample set need not contain only samples monitored by a single monitoring device in one time period; it may contain samples captured by different monitoring devices, at different mounting angles, in different time periods. The different monitoring devices may be monitoring devices deployed in different scenarios, such as urban roads, tunnels, and expressways.
Second, the scene picture samples in the sample set are labeled. Optionally, the samples may be labeled with Photoshop or other labeling tools to mark the scenes of the various categories in each sample. In one example, scenes are divided into 9 categories: road surface without any traffic marking (labeled as the target road surface), zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, greening, guardrail, and other designated scene information such as the background. Figs. 3a and 3b illustrate several of these categories.
Finally, the scene segmentation model is trained with the labeled scene picture samples in an established Caffe environment, yielding the scene segmentation model described above. Once trained, the model can classify road surface scenes, as shown in the flows of fig. 2 or fig. 4.
The method provided by the present application is described above, and the device provided by the present application is described below:
referring to fig. 10, fig. 10 is a diagram illustrating the structure of the apparatus according to the present invention. The apparatus is applied to an electronic device, and as shown in fig. 10, the apparatus may include: the device comprises a scene determining unit, a detection mechanism determining unit and an event detecting unit.
Optionally, the scene determining unit is configured to determine corresponding road traffic scene information according to the at least one collected frame of image.
And the detection mechanism determining unit is used for determining at least one target event detection mechanism corresponding to the road traffic scene information from all configured event mechanisms.
And the event detection unit is used for detecting the corresponding traffic event according to the target event detection mechanism.
As an embodiment, the determining the corresponding road traffic scene information according to the at least one collected frame of image by the scene determining unit may include: inputting a currently acquired current image frame into a trained scene segmentation model to obtain a target road scene classification; and determining the road traffic scene information according to the target road scene classification. The classification of the target road surface scenes comprises different types of road surface scenes, wherein the different types of road surface scenes at least comprise: zebra crossing, white lane marking, yellow lane marking, road direction sign, diversion strip, guardrail, greening, and other designated scene information.
As an embodiment, the scene determining unit determines the corresponding road traffic scene information from the acquired at least one image frame by: sequentially inputting the acquired N image frames into the trained scene segmentation model to obtain candidate road surface scene classifications corresponding to the N image frames; determining the target road surface scene classification from those candidate classifications; and determining the road traffic scene information from the target road surface scene classification. Optionally, N is greater than 1, and the N image frames include the currently acquired current image frame and N-1 previously acquired image frames. The target road surface scene classification includes different categories of road surface scenes, at least including: zebra crossing, white lane line, yellow lane line, road direction sign, diversion strip, guardrail, greening, and other designated scene information.
In one example, the classification of candidate road surface scenes includes different classes of candidate road surface scenes including at least: zebra crossing, white lane marking, yellow lane marking, road direction sign, flow guiding belt, guardrail, greening and other designated scene information;
based on this, the scene determination unit determines the target road surface scene classification according to the candidate road surface scene classification corresponding to the N frames of images, including: selecting candidate road surface scenes belonging to the same category from the candidate road surface scene categories respectively corresponding to the N frames of images; and generating a target road surface scene corresponding to the category according to the selected candidate road surface scenes belonging to the same category.
As an embodiment, the scene determining unit determines the road traffic scene information from the target road surface scene classification by: determining, from the target road surface scene classification, the scene information of road surface scenes satisfying a set condition, where such a scene forms a planar structure on the road surface and the scenes forming a planar structure at least include: diversion strips, greening, guardrails, zebra crossings, and target roads not provided with any traffic marking; determining the lane line information of different lane lines from the target road surface scene classification; determining lane information from the lane line information of a lane line and the determined scene information of the target road surface adjacent to that lane line, and determining the lane direction information of the lane corresponding to the lane information; determining road direction signs from the target road surface scene classification and determining the corresponding road direction information from them; and determining the road traffic scene information from the scene information of the road surface scenes satisfying the set condition, the lane line information, the lane information, the lane direction information, and the road direction information.
As an embodiment, the determining, by the scene determining unit, lane direction information of a lane corresponding to the lane information includes: acquiring a motion track of a tracked target object on a lane corresponding to the lane information; determining the lane type of the lane according to the target object and the action track of the target object; when the lane type is not a type for indicating a non-motor lane, determining lane direction information of the lane according to the action track.
As an embodiment, the detection mechanism determining unit determines at least one target event detection mechanism corresponding to the road traffic scene information from all configured event mechanisms, including:
when a target road surface exists in the road traffic scene information and no traffic marking is set on the target road surface, determining the event detection mechanism corresponding to the target road surface as the target event detection mechanism, where the event detection mechanism corresponding to the target road surface is at least used for detecting roadblocks, construction, smoke and fire; and/or,
when a non-motor vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the non-motor vehicle lane as the target event detection mechanism, where that event detection mechanism is at least used for detecting occupation of the non-motor vehicle lane by a motor vehicle; and/or,
when a lane line exists in the road traffic scene information and the lane line indicates that crossing is forbidden, determining the event detection mechanism corresponding to the lane line as the target event detection mechanism, where that event detection mechanism is at least used for detecting line pressing and lane changing; and/or,
when a motor vehicle lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the motor vehicle lane as the target event detection mechanism, where that event detection mechanism is at least used for detecting illegal parking and pedestrians; and/or,
when the road traffic scene information contains lane direction information, determining the event detection mechanism corresponding to the lane direction information as the target event detection mechanism, where that event detection mechanism is at least used for detecting wrong-way driving and reversing; and/or,
when a passing lane exists in the road traffic scene information, determining the event detection mechanism corresponding to the passing lane as the target event detection mechanism, where that event detection mechanism is at least used for detecting occupation of the passing lane.
As an embodiment, the event detection unit further controls a camera device to capture the traffic event after detecting the corresponding traffic event according to the target event detection mechanism.
Thus, the apparatus structure diagram provided in the present application is completed.
Correspondingly, the application also provides a hardware structure of the device shown in fig. 10. Referring to fig. 11, the hardware structure may include: a processor and a machine-readable storage medium having stored thereon machine-executable instructions executable by the processor; the processor is configured to execute machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the method, embodiments of the present application further provide a machine-readable storage medium, where several computer instructions are stored, and when the computer instructions are executed by a processor, the method disclosed in the above example of the present application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Furthermore, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A traffic incident detection method, applied to an electronic device, the method comprising:
determining corresponding road traffic scene information according to at least one collected image frame;
determining, from all configured event detection mechanisms, at least one target event detection mechanism corresponding to the road traffic scene information; and
detecting the corresponding traffic event according to the target event detection mechanism.
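For illustration only, the three claimed steps can be sketched as the following control flow. The claim does not prescribe an implementation; every function and variable name below (`detect_traffic_events`, `recognize_scene`, `mechanisms_for`, `detectors`, and the string labels) is a hypothetical placeholder.

```python
# Hypothetical sketch of claim 1's pipeline: recognize the scene,
# select only the matching detection mechanisms, then run those.
def detect_traffic_events(frames, recognize_scene, mechanisms_for, detectors):
    scene_info = recognize_scene(frames)   # step 1: road traffic scene information
    targets = mechanisms_for(scene_info)   # step 2: target event detection mechanisms
    events = []
    for name in targets:                   # step 3: run only the relevant detectors
        events.extend(detectors[name](frames))
    return events

# Toy wiring to show the control flow:
frames = ["frame0", "frame1"]
events = detect_traffic_events(
    frames,
    recognize_scene=lambda f: ["non_motor_lane"],
    mechanisms_for=lambda s: ["motor_vehicle_occupation"] if "non_motor_lane" in s else [],
    detectors={"motor_vehicle_occupation": lambda f: ["motor_vehicle_on_non_motor_lane"]},
)
print(events)  # -> ['motor_vehicle_on_non_motor_lane']
```

The point of the structure is that detectors irrelevant to the recognized scene are never invoked, which is what distinguishes scene-adaptive detection from running every configured mechanism on every frame.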
2. The method of claim 1, wherein determining the corresponding road traffic scene information according to the at least one collected image frame comprises:
inputting a currently acquired image frame into a trained scene segmentation model to obtain a target road surface scene classification, wherein the target road surface scene classification comprises different types of road surface scenes, the different types at least comprising: a zebra crossing, a white lane line, a yellow lane line, a road direction sign, a diversion area, a guardrail, greening, and other specified scene information on the road surface; and
determining the road traffic scene information according to the target road surface scene classification.
3. The method of claim 1, wherein determining the corresponding road traffic scene information according to the at least one collected image frame comprises:
sequentially inputting N acquired image frames into a trained scene segmentation model to obtain candidate road surface scene classifications respectively corresponding to the N image frames, wherein N is greater than 1, and the N image frames comprise a currently acquired image frame and N-1 previously acquired image frames;
determining a target road surface scene classification according to the candidate road surface scene classifications respectively corresponding to the N image frames, wherein the target road surface scene classification comprises different types of road surface scenes, the different types at least comprising: a zebra crossing, a white lane line, a yellow lane line, a road direction sign, a diversion area, a guardrail, greening, and other specified scene information; and
determining the road traffic scene information according to the target road surface scene classification.
4. The method of claim 3, wherein each candidate road surface scene classification comprises different classes of candidate road surface scenes, the different classes at least comprising: a zebra crossing, a white lane line, a yellow lane line, a road direction sign, a diversion area, a guardrail, greening, and other specified scene information;
and wherein determining the target road surface scene classification according to the candidate road surface scene classifications respectively corresponding to the N image frames comprises:
selecting, from the candidate road surface scene classifications respectively corresponding to the N image frames, candidate road surface scenes belonging to the same class; and
generating a target road surface scene corresponding to that class according to the selected candidate road surface scenes belonging to the same class.
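One plausible reading of claims 3-4 is a per-class vote across the N per-frame results: a class survives into the target classification only if enough frames agree on it, which suppresses single-frame segmentation noise. A minimal sketch under that assumption (the patent does not specify the fusion rule; `fuse_scene_classifications` and `min_votes` are illustrative names):

```python
from collections import Counter

def fuse_scene_classifications(per_frame_scenes, min_votes):
    """Keep a scene class in the target classification only if it appears
    in at least min_votes of the N per-frame candidate classifications."""
    votes = Counter()
    for scenes in per_frame_scenes:
        votes.update(set(scenes))  # one vote per class per frame
    return {cls for cls, n in votes.items() if n >= min_votes}

# Three frames' candidate classifications; 'guardrail' appears only once
# and is dropped when we require agreement from at least 2 frames.
frames = [
    {"zebra_crossing", "white_lane_line"},
    {"zebra_crossing", "guardrail"},
    {"zebra_crossing", "white_lane_line"},
]
print(sorted(fuse_scene_classifications(frames, min_votes=2)))
# -> ['white_lane_line', 'zebra_crossing']
```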
5. The method according to claim 2 or 3, wherein determining the road traffic scene information according to the target road surface scene classification comprises:
determining, from the target road surface scene classification, scene information of road surface scenes satisfying a set condition, wherein a road surface scene satisfying the set condition is a road surface scene arranged as a planar structure on the road surface, such scenes at least comprising: a diversion area, greening, a guardrail, a zebra crossing, and a target road surface on which no traffic marking is arranged;
determining lane line information of different lane lines from the target road surface scene classification;
determining lane information according to the lane line information of a lane line and the determined scene information of the target road surface adjacent to that lane line, and determining lane direction information of the lane corresponding to the lane information;
determining road direction signs from the target road surface scene classification, and determining corresponding road direction information according to the road direction signs; and
determining the road traffic scene information according to the scene information of the road surface scenes satisfying the set condition, the lane line information, the lane direction information, and the road direction information.
6. The method of claim 5, wherein determining the lane direction information of the lane corresponding to the lane information comprises:
acquiring a motion track of a tracked target object on the lane corresponding to the lane information;
determining the lane type of the lane according to the target object and the motion track of the target object; and
when the lane type does not indicate a non-motor vehicle lane, determining the lane direction information of the lane according to the motion track.
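The direction inference in claim 6 could be as simple as comparing the first and last tracked positions and taking the dominant axis. A hedged sketch under that assumption (the claim does not define the direction representation; the function name, the coordinate convention of image-space `(x, y)` points, and the coarse direction labels are all illustrative):

```python
def lane_direction_from_track(track, lane_type):
    """track: list of (x, y) image positions of one tracked vehicle over time.
    Returns a coarse lane direction, or None for a non-motor vehicle lane
    (per claim 6, direction is only derived when the lane type is not
    a non-motor vehicle lane)."""
    if lane_type == "non_motor_lane":
        return None
    (x0, y0), (x1, y1) = track[0], track[-1]
    dx, dy = x1 - x0, y1 - y0
    # Reduce the net displacement to its dominant image axis.
    if abs(dy) >= abs(dx):
        return "down" if dy > 0 else "up"
    return "right" if dx > 0 else "left"

# A vehicle drifting down the frame implies a lane oriented toward the camera:
print(lane_direction_from_track([(100, 50), (102, 80), (101, 120)], "motor_lane"))
# -> down
```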
7. The method of claim 1, wherein determining the at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms comprises:
when a target road surface exists in the road traffic scene information and no traffic marking is arranged on the target road surface, determining an event detection mechanism corresponding to the target road surface as the target event detection mechanism, wherein the event detection mechanism corresponding to the target road surface is at least used for detecting roadblocks, construction, and smoke and fire; and/or
when a non-motor vehicle lane exists in the road traffic scene information, determining an event detection mechanism corresponding to the non-motor vehicle lane as the target event detection mechanism, wherein the event detection mechanism corresponding to the non-motor vehicle lane is at least used for detecting that the non-motor vehicle lane is occupied by a motor vehicle; and/or
when a lane line exists in the road traffic scene information and the lane line indicates that line crossing is forbidden, determining an event detection mechanism corresponding to the lane line as the target event detection mechanism, wherein the event detection mechanism corresponding to the lane line is at least used for detecting line pressing and lane changing; and/or
when a motor vehicle lane exists in the road traffic scene information, determining an event detection mechanism corresponding to the motor vehicle lane as the target event detection mechanism, wherein the event detection mechanism corresponding to the motor vehicle lane is at least used for detecting parking violations and pedestrians; and/or
when the road traffic scene information contains lane direction information, determining an event detection mechanism corresponding to the lane direction information as the target event detection mechanism, wherein the event detection mechanism corresponding to the lane direction information is at least used for detecting reverse driving and reversing; and/or
when a passing lane exists in the road traffic scene information, determining an event detection mechanism corresponding to the passing lane as the target event detection mechanism, wherein the event detection mechanism corresponding to the passing lane is at least used for detecting that the passing lane is occupied.
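The "and/or" branches of claim 7 amount to a lookup table from scene elements to detection mechanisms, with the union taken over every element present. A sketch of that table (keys and mechanism names are paraphrased from the claim wording; the table structure and `target_mechanisms` function are illustrative, not part of the patent):

```python
# Hypothetical scene-element -> detection-mechanism table paraphrasing claim 7.
MECHANISM_TABLE = {
    "unmarked_road_surface": ["roadblock", "construction", "smoke_and_fire"],
    "non_motor_lane": ["motor_vehicle_occupying_non_motor_lane"],
    "no_crossing_lane_line": ["line_pressing", "lane_changing"],
    "motor_lane": ["parking_violation", "pedestrian"],
    "lane_direction": ["reverse_driving", "reversing"],
    "passing_lane": ["passing_lane_occupied"],
}

def target_mechanisms(scene_elements):
    """Union of the mechanisms for every element present ('and/or' in the claim),
    preserving table order and dropping duplicates."""
    result = []
    for element in scene_elements:
        for mech in MECHANISM_TABLE.get(element, []):
            if mech not in result:
                result.append(mech)
    return result

print(target_mechanisms(["motor_lane", "lane_direction"]))
# -> ['parking_violation', 'pedestrian', 'reverse_driving', 'reversing']
```

Because the selection is driven purely by what the scene contains, a scene with no recognized elements yields an empty mechanism list and no detectors run.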
8. The method of claim 1, wherein, when the corresponding traffic event is detected according to the target event detection mechanism, the method further comprises:
controlling a camera device to capture the traffic event.
9. A traffic incident detection device applied to an electronic device, comprising:
the scene determining unit is used for determining corresponding road traffic scene information according to the collected at least one frame of image;
the detection mechanism determining unit is used for determining at least one target event detection mechanism corresponding to the road traffic scene information from all configured event detection mechanisms;
and the event detection unit is used for detecting the corresponding traffic event according to the target event detection mechanism.
10. An electronic device, comprising: a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions executable by the processor;
the processor is configured to execute machine executable instructions to perform the method steps of any of claims 1-9.
CN202010238631.2A 2020-03-30 2020-03-30 Traffic incident detection method and device Pending CN111753634A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010238631.2A CN111753634A (en) 2020-03-30 2020-03-30 Traffic incident detection method and device


Publications (1)

Publication Number Publication Date
CN111753634A true CN111753634A (en) 2020-10-09

Family

ID=72673195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010238631.2A Pending CN111753634A (en) 2020-03-30 2020-03-30 Traffic incident detection method and device

Country Status (1)

Country Link
CN (1) CN111753634A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102945603A (en) * 2012-10-26 2013-02-27 青岛海信网络科技股份有限公司 Method for detecting traffic event and electronic police device
CN103366571A (en) * 2013-07-03 2013-10-23 河南中原高速公路股份有限公司 Intelligent method for detecting traffic accident at night
CN103971521A (en) * 2014-05-19 2014-08-06 清华大学 Method and device for detecting road traffic abnormal events in real time
CN104809874A (en) * 2015-04-15 2015-07-29 东软集团股份有限公司 Traffic accident detection method and device
WO2020000251A1 (en) * 2018-06-27 2020-01-02 潍坊学院 Method for identifying video involving violation at intersection based on coordinated relay of video cameras


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052048A (en) * 2021-03-18 2021-06-29 北京百度网讯科技有限公司 Traffic incident detection method and device, road side equipment and cloud control platform
CN113205037A (en) * 2021-04-28 2021-08-03 北京百度网讯科技有限公司 Event detection method and device, electronic equipment and readable storage medium
CN113205037B (en) * 2021-04-28 2024-01-26 北京百度网讯科技有限公司 Event detection method, event detection device, electronic equipment and readable storage medium
CN113936465A (en) * 2021-10-26 2022-01-14 公安部道路交通安全研究中心 Traffic incident detection method and device
CN113936465B (en) * 2021-10-26 2023-08-18 公安部道路交通安全研究中心 Traffic event detection method and device
WO2023115977A1 (en) * 2021-12-22 2023-06-29 杭州海康威视系统技术有限公司 Event detection method, apparatus, and system, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN111753634A (en) Traffic incident detection method and device
CN109637151B (en) Method for identifying illegal driving of emergency lane on highway
CN112069643B (en) Automatic driving simulation scene generation method and device
CN109584578A (en) The method and apparatus of traveling lane for identification
CN105843943B (en) Vehicle permanent residence analysis method
CN110619279B (en) Road traffic sign instance segmentation method based on tracking
CN110795813A (en) Traffic simulation method and device
WO2013186662A1 (en) Multi-cue object detection and analysis
KR20180046798A (en) Method and apparatus for real time traffic information provision
WO2015089867A1 (en) Traffic violation detection method
CN110032947B (en) Method and device for monitoring occurrence of event
CN101923784A (en) Traffic light regulating system and method
CN109615862A (en) Road vehicle movement of traffic state parameter dynamic acquisition method and device
CN110532916A (en) A kind of motion profile determines method and device
Shirke et al. Lane datasets for lane detection
CN113903008A (en) Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
CN113380021B (en) Vehicle state detection method, device, server and computer readable storage medium
CN106919939A (en) A kind of traffic signboard Tracking Recognition method and system
Satzoda et al. Drive analysis using lane semantics for data reduction in naturalistic driving studies
CN109300313B (en) Illegal behavior detection method, camera and server
CN113112813B (en) Illegal parking detection method and device
CN108682154B (en) Road congestion detection system based on deep learning analysis of traffic flow state change
CN113392680B (en) Road identification device and method and electronic equipment
JP2000231693A (en) Vehicle passage monitoring/managing device
CN111179610A (en) Control method and device of traffic signal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination