CN113052047B - Traffic event detection method, road side equipment, cloud control platform and system


Info

Publication number
CN113052047B
Authority
CN
China
Prior art keywords
pixel
target
information
determining
initial
Legal status
Active
Application number
CN202110290137.5A
Other languages
Chinese (zh)
Other versions
CN113052047A (en)
Inventor
董子超
董洪义
时一峰
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202110290137.5A
Publication of CN113052047A
Application granted
Publication of CN113052047B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G 1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors

Abstract

The application discloses a traffic event detection method, road side equipment, a cloud control platform and a system, and relates to artificial intelligence, automatic driving, intelligent traffic, vehicle-road cooperative sensing and computer vision in computer technology and image processing. The method comprises the following steps: acquiring an image to be detected corresponding to a preset road section and obtaining target pixel information corresponding to each pixel position in the image to be detected; determining a target static object in the image to be detected according to the target pixel information of each pixel position and pre-stored initial pixel information of each pixel position; determining the stay time of the target static object; and determining that a traffic event exists in the preset road section if the stay time is greater than a preset time threshold. This avoids the defects of the related art, in which many objects must be analyzed at a high analysis cost and with high resource consumption, so resources are saved; and because fewer objects are analyzed, the interference caused by analyzing many objects is reduced, which improves the detection accuracy and reliability.

Description

Traffic event detection method, road side equipment, cloud control platform and system
Technical Field
The application relates to artificial intelligence, automatic driving, intelligent traffic, cooperative sensing of vehicles and roads and computer vision in computer technology and image processing, in particular to a traffic event detection method, road side equipment, a cloud control platform and a system.
Background
In traffic scenarios, traffic events occur from time to time, for example, a motor vehicle hitting a pedestrian or a non-motor vehicle colliding with a motor vehicle. To improve the safety of vehicles, pedestrians and the like, traffic events may be detected.
The traditional traffic event detection method comprises the following steps: acquiring multi-frame images of a road section over a preset time period, judging the stay time of each vehicle in the multi-frame images, and determining whether a traffic event occurs by combining information about other vehicles, pedestrians and the like around each vehicle.
However, determining whether a traffic event occurs by combining information about other vehicles, pedestrians and the like increases the number of targets to be analyzed, which leads to high analysis complexity, false detections and low reliability.
Disclosure of Invention
The application provides a traffic event detection method, road side equipment, a cloud control platform and a system for reducing analysis complexity and improving detection reliability.
According to a first aspect of the present application, there is provided a method for detecting a traffic event, including:
acquiring an image to be detected corresponding to a preset road section, and acquiring target pixel information corresponding to each pixel position in the image to be detected;
determining a target static object in the image to be detected according to target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, wherein the initial pixel information is obtained by analyzing a plurality of sample images of the preset road section, and the sample images are images of the preset road section under normal traffic;
and determining the stay time of the target static object, and determining that a traffic event exists in the preset road section if the stay time is greater than a preset time threshold.
According to a second aspect of the present application, there is provided a traffic event detection device comprising:
the first acquisition unit is used for acquiring an image to be detected corresponding to a preset road section and acquiring target pixel information corresponding to each pixel position in the image to be detected;
the first determining unit is used for determining a target static object in the image to be detected according to target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, wherein the initial pixel information is obtained by analyzing a plurality of sample images of the preset road section, and the sample images are images of the preset road section under normal traffic;
A second determining unit configured to determine a residence time of the target stationary object;
and the third determining unit is used for determining that the traffic event exists in the preset road section if the stay time is larger than a preset time threshold value.
According to a third aspect of the present application, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of the first aspect.
According to a fifth aspect of the present application, there is provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the method of the first aspect.
According to a sixth aspect of the present application, there is provided a road side device comprising an electronic device as described in the third aspect.
According to a seventh aspect of the present application, there is provided a cloud control platform, including an electronic device as described in the third aspect.
According to an eighth aspect of the present application, there is provided a traffic event detection system, comprising: a camera and the device according to the second aspect, wherein
the camera is used for collecting images to be detected corresponding to a preset road section and sending the images to be detected to the device.
It should be understood that the description of this section is not intended to identify key or critical features of the embodiments of the application or to delineate the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The drawings are for better understanding of the present solution and do not constitute a limitation of the present application. Wherein:
FIG. 1 is a diagram of a scenario in which the traffic event detection method of embodiments of the present application may be implemented;
FIG. 2 is a schematic diagram according to a first embodiment of the present application;
FIG. 3 is a schematic diagram according to a second embodiment of the present application;
FIG. 4 is a schematic diagram according to a third embodiment of the present application;
FIG. 5 is a schematic diagram according to a fourth embodiment of the present application;
fig. 6 is a block diagram of an electronic device for implementing a method of detecting traffic events in accordance with an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram of a scenario in which the traffic event detection method of an embodiment of the present application may be implemented. As shown in fig. 1, a vehicle 101 travels on a road segment 102, one or more road side devices 103 may be disposed on at least one side of the road segment 102, and one or more cameras 104 may be disposed on at least one side of the road segment 102.
As shown in fig. 1, the camera 104 may collect an image of the road section 102, and the image may include the vehicle 101 traveling on the road section 102.
The camera 104 may be disposed on the same side as the road side device 103, or on a different side from the road side device 103. The camera 104 may be connected to the roadside device 103 and may transmit the images it acquires to the roadside device 103.
The road side device 103 may analyze the images transmitted by the cameras 104 to determine whether a traffic event has occurred on the road segment 102.
It should be noted that fig. 1 is only for exemplary illustration, and is not to be construed as limiting the application scenario of the traffic event detection method of the present embodiment.
For example, the application scenario shown in fig. 1 may further include more vehicles 101, more cameras 104, more road side devices 103, and so on.
Likewise, fewer vehicles 101, fewer roadside devices 103, etc. may also be included in the application scenario illustrated in fig. 1.
As another example, the application scenario shown in fig. 1 may further include a server connected to the roadside device 103, where the server may be a local server or a cloud server (such as a cloud control platform); the cloud server may be preferred in consideration of its computing power resources and the like. If the application scenario includes a server, the server may analyze the images.
As another example, the application scenario shown in fig. 1 may further include a pedestrian walking on the road segment 102, a bicycle driving on the road segment 102, and so on.
In the related art, the traffic event detection method includes: the camera 104 collects an image and transmits the collected image to the roadside apparatus 103.
The roadside apparatus 103 determines, from the images collected over a period of time (e.g., 2 minutes), how long each vehicle traveling on the road section 102 stays during that period, and determines whether a traffic event has occurred on the road section 102 by combining information about the vehicles, pedestrians and the like surrounding each vehicle.
For example, if the stay time of vehicle A is 1 minute and the stay time of the other vehicles is between 50 seconds and 55 seconds, the roadside apparatus 103 determines that a traffic event has occurred on the road segment 102.
However, with the solution adopted in the related art, the road side device 103 needs to analyze multiple targets, for example, recording and analyzing the residence time of each vehicle and pedestrian in each image, which may cause a greater complexity of analysis, a higher consumption of analysis resources, and a lower accuracy and reliability of the detection result due to the complex analysis.
In order to avoid at least one of the above technical problems, the inventors of the embodiments of the present application have creatively worked to obtain the inventive concept of the embodiments of the present application: and determining a stationary object during detection according to the pixel information of each pixel position during detection and the pixel information of each pixel position during normal traffic, and determining whether a traffic event exists or not based on the stay time of the stationary object.
Based on the inventive concept, the application provides a traffic event detection method, road side equipment, a cloud control platform and a system, which are applied to artificial intelligence, automatic driving, intelligent traffic, vehicle-road cooperative sensing and computer vision in computer technology and image processing, so as to achieve the technical effects of improving detection efficiency and detection reliability.
Fig. 2 is a schematic diagram according to a first embodiment of the present application, and as shown in fig. 2, a traffic event detection method of the present embodiment includes:
s201: and acquiring an image to be detected corresponding to the preset road section, and acquiring target pixel information corresponding to each pixel position in the image to be detected.
The execution body of the embodiment may be a detection device of a traffic event (hereinafter referred to as a detection device), the detection device may be a server (including a local server and a cloud server, where the server may be a cloud control platform, a vehicle-road collaborative management platform, a central subsystem, an edge computing platform, a cloud computing platform, etc.), or may be a road side device, or may be a terminal device, or may be a processor, or may be a chip, etc., and the embodiment is not limited. In the system architecture of intelligent traffic road cooperation, the road side equipment comprises the road side sensing equipment and the road side computing equipment, wherein the road side sensing equipment (such as a road side camera) is connected to the road side computing equipment (such as a road side computing unit RSCU), the road side computing equipment is connected to a server, and the server can communicate with an automatic driving or assisted driving vehicle in various modes; alternatively, the roadside awareness device itself includes a computing function, and the roadside awareness device is directly connected to the server. The above connections may be wired or wireless.
It should be understood that the preset road section may be any road section, may be a road section including a traffic light, may be a road section without a traffic light, may be a road section including an intersection, may be a road section including a t-intersection, may be a road section without an intersection, and the information such as the length and the width of the road section is not limited in this embodiment.
Illustratively, the image to be detected includes a plurality of pixel positions, and a pixel position can be understood as a position in the image coordinate system corresponding to a physical point in the world coordinate system, and each pixel position corresponds to the target pixel information. For example, the target pixel information may be a pixel value.
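As a minimal, non-authoritative sketch of this data layout, the image to be detected can be held as a two-dimensional array in which each (row, column) index is a pixel position and the stored value is the target pixel information; the file name and the OpenCV/NumPy usage below are illustrative assumptions, not part of the application.

```python
# Minimal sketch: each (row, column) index is a pixel position and the stored
# value is the target pixel information (here a grayscale pixel value).
# The file name and the OpenCV/NumPy usage are illustrative assumptions.
import cv2
import numpy as np

image = cv2.imread("frame_to_detect.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "illustrative path; replace with a real frame of the preset road section"
target_pixel_values = image.astype(np.float32)  # value at [y, x] = target pixel information
print(target_pixel_values.shape, target_pixel_values[0, 0])
```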
S202: and determining a target static object in the image to be detected according to the target pixel information of each pixel position and the pre-stored initial pixel information of each pixel position.
The initial pixel information is obtained by analyzing a plurality of sample images of a preset road section, wherein the sample images are images of the preset road section under normal traffic.
For example, the detection device may acquire a plurality of sample images of the preset road section under normal traffic (i.e. with no traffic event; a traffic event may be understood as an event that prevents vehicles or pedestrians from passing normally, such as a traffic jam caused by a collision of vehicles or a closure caused by temporary emergency repair of a road facility), and analyze each sample image to obtain the pixel information (i.e. the initial pixel information) of each pixel position.
Note that different images of the preset road section include the same pixel positions, but since the vehicles, pedestrians and the like on the preset road section are in motion, the pixel information of the same pixel position may be the same or different in different images.
This embodiment introduces the following feature: on the basis of the initial pixel information obtained by analyzing images under normal traffic, the detection device combines the target pixel information obtained at detection time to determine the target static object in the image to be detected. With this feature, the number of targets to be analyzed is relatively reduced compared with the related art (objects in a motion state no longer need to be considered and analyzed), which saves analysis resources; and because less analysis is performed, fewer analysis errors are introduced, improving the technical effects of detection accuracy and reliability.
S203: and determining the stay time of the target static object, and determining that a traffic event exists in a preset road section if the stay time is greater than a preset time threshold.
The time threshold may be set by the detection device based on experience, history, and experiment, and the embodiment is not limited.
This step can be understood as: the detection device can determine the stay time of the target static object, judge the stay time and the time threshold value, and if the stay time is larger than the time threshold value, the detection device can determine that the traffic event exists in the preset road section.
In other embodiments, if the detection device determines that the residence time is less than the time threshold, the detection device may determine that no traffic event exists on the preset road segment.
Based on the above analysis, the embodiment of the application provides a traffic event detection method, which includes: acquiring an image to be detected corresponding to a preset road section and obtaining target pixel information corresponding to each pixel position in the image to be detected; determining a target stationary object in the image to be detected according to the target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, wherein the initial pixel information is obtained by analyzing a plurality of sample images of the preset road section and the sample images are images of the preset road section under normal traffic; determining the stay time of the target stationary object; and determining that a traffic event exists in the preset road section if the stay time is greater than a preset time threshold. In this embodiment, the target stationary object in the image to be detected is determined according to the target pixel information of each pixel position and the initial pixel information. Because the initial pixel information is generated from images under normal traffic, it represents the pixel information under normal traffic, so the target stationary object can be determined accurately and efficiently by comparing the initial pixel information with the target pixel information. Objects in a motion state do not need to be analyzed, which, relative to the related art, reduces the number of analysis targets, saves analysis resources, and reduces the interference caused by analyzing many targets, thereby improving the accuracy and reliability of traffic event detection.
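For orientation only, the following minimal sketch strings these steps together. It assumes grayscale frames, a per-pixel Gaussian background model given as mean/variance arrays, a 3-sigma style test, and a simple per-pixel persistence counter in place of the per-object stay time described in the second embodiment; all names are illustrative and not taken from the application.

```python
# Illustrative end-to-end sketch of the acquire/compare/time-threshold steps.
# Assumptions: grayscale frames, a per-pixel Gaussian background model (mean,
# var), a 3-sigma test, and per-pixel persistence as a stand-in for the
# per-object stay time described later.
import numpy as np

def detect_traffic_event(frames, mean, var, k=3.0, fps=25.0, time_threshold_s=60.0):
    """Return True if some stationary deviation from normal traffic persists too long."""
    std = np.sqrt(var)
    stationary_age = np.zeros(mean.shape, dtype=np.int32)  # frames each pixel has deviated
    for frame in frames:  # images to be detected, in time order
        deviates = np.abs(frame.astype(np.float32) - mean) > k * std
        stationary_age = np.where(deviates, stationary_age + 1, 0)
        if (stationary_age.max() / fps) > time_threshold_s:
            return True  # stay time of a candidate stationary object exceeds the threshold
    return False
```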
Fig. 3 is a schematic diagram according to a second embodiment of the present application, and as shown in fig. 3, a traffic event detection method of the present embodiment includes:
s301: a plurality of sample images of a preset road section are acquired.
The sample image is an image corresponding to normal traffic.
For example, if the traffic event detection method of the present embodiment is applied to the application scenario shown in fig. 1, this step may be understood as follows: the roadside device may receive a plurality of sample images transmitted by the camera.
For example, the camera may collect images (or a video) of the preset road section and transmit the collected images to the road side device.
Accordingly, the road side device may select a plurality of images when the traffic state of the preset road section is normal traffic as the sample images. The plurality of sample images may be continuous images or discontinuous images, which is not limited in this embodiment.
S302: each sample image is acquired with a pixel value at each pixel location.
For example, the roadside device may analyze each sample image to obtain a respective pixel value corresponding to each pixel position of each sample image.
S303: for any pixel position, determining initial pixel information of any pixel position according to the pixel value of each sample image at any pixel position.
Wherein the initial pixel information includes a pixel mean and a pixel variance.
The background recognition model may be constructed by background modeling; the background recognition model may be a Gaussian model, which has certain Gaussian distribution information, and the Gaussian distribution information includes the pixel mean and the pixel variance corresponding to each pixel position.
For example, the roadside device may perform an initialization process on the matrix parameters of the Gaussian model. The initialization process may be understood as randomly setting the matrix parameters of the Gaussian model.
The road side equipment can train the Gaussian model based on the plurality of sample images (which may be T frames of a video), so that the trained Gaussian model has a certain Gaussian distribution whose information includes the pixel mean and the pixel variance corresponding to each pixel position.
The training process can be understood as follows: for each pixel position, the mean of the pixel values at that position (i.e., the pixel mean) and the variance of the pixel values at that position (i.e., the pixel variance) are determined in turn. It should be noted that the principle of calculating the pixel mean and the pixel variance can be found in the related art and is not described herein.
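A minimal sketch of this initialization, assuming the sample images are grayscale arrays of identical size; the NumPy usage and the small epsilon that avoids zero variance are assumptions.

```python
# Sketch of building the per-pixel background statistics from sample images
# collected under normal traffic. Assumes grayscale images of identical size.
import numpy as np

def build_background_model(sample_images):
    stack = np.stack([img.astype(np.float32) for img in sample_images], axis=0)
    pixel_mean = stack.mean(axis=0)        # initial pixel mean per pixel position
    pixel_var = stack.var(axis=0) + 1e-6   # initial pixel variance per pixel position
    return pixel_mean, pixel_var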
In this embodiment, the pixel value at each pixel position of each sample image under normal traffic is determined, and the pixel mean and pixel variance corresponding to each pixel position are then obtained by analyzing these pixel values, so that objects in a motion state (such as vehicles and pedestrians) can be filtered out based on the pixel mean and pixel variance to obtain the target stationary object, from which the traffic event is determined. This improves the accuracy and reliability of the filtering and, in turn, the accuracy and reliability of the traffic event determination.
S304: and acquiring an image to be detected corresponding to the preset road section, and acquiring target pixel information corresponding to each pixel position in the image to be detected.
Illustratively, the description of S304 may be described with reference to S201, and will not be repeated here.
S305: and determining difference information between target pixel information and initial pixel information of each pixel position, and determining a target static object according to each difference information.
Based on the above analysis, the initial pixel information is determined under normal traffic, and under normal traffic vehicles, pedestrians and the like are generally in a motion state, i.e., they are generally objects in a non-stationary state. In this embodiment, for each pixel position, the target pixel information is compared with the initial pixel information and the difference information between the two is determined, so that objects in a non-stationary state (i.e., objects in a motion state) and objects in a stationary state (i.e., the target stationary object) can be distinguished relatively accurately and efficiently. This avoids analyzing objects in a non-stationary state, reduces the analysis difficulty and the demand for analysis resources, saves resources, improves the analysis efficiency, and improves the accuracy of the analysis result (i.e., the accuracy of traffic event detection).
In some embodiments, the target pixel information includes a target pixel value, and S305 may include the steps of:
step 1: and calculating the difference value between the target pixel value corresponding to each pixel position and the pixel mean value.
Wherein the difference information includes a difference value.
Step 2: and obtaining pixel positions with difference values larger than preset pixel threshold values corresponding to the pixel positions respectively from the pixel positions.
In some embodiments, the pixel threshold is determined based on a pixel variance.
Step 3: and determining the pixel position with the difference value larger than the preset pixel threshold value as a target static object corresponding to the object in the image to be detected.
In combination with the above analysis, the initial pixel information includes the pixel mean and the pixel variance and is determined from images under normal traffic. Therefore, for any pixel position, if the difference is small, the deviation of the target pixel value is small, the target pixel value probably conforms to the Gaussian distribution, and the object at that pixel position is probably an object in a motion state; if the difference is large, the deviation of the target pixel value is large, the target pixel value probably does not conform to the Gaussian distribution, and the object at that pixel position is probably an object in a stationary state, so it may be determined as the target stationary object.
Specifically, for any pixel position, the road side device may calculate the difference between the target pixel value and the pixel mean. If the difference is less than three times the pixel variance, the deviation of the target pixel value is small and the object at that pixel position is an object in a motion state; otherwise, if the difference is greater than three times the pixel variance, the deviation of the target pixel value is large and the object at that pixel position is an object in a stationary state (i.e., the target stationary object).
It should be noted that, in this embodiment, by determining the difference between the target pixel value and the pixel mean value of each pixel position, and determining the magnitude relation between the difference and the pixel threshold (which may be understood as the association relation between the difference and the pixel variance), the target stationary object may be determined based on the magnitude relation, so that the target stationary object may be determined conveniently and quickly.
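A minimal sketch of this comparison; the description above speaks of three times the pixel variance, so taking the threshold as a multiple of the per-pixel standard deviation, as below, is a common variant and an assumption here.

```python
# Sketch of the per-pixel difference test against the stored mean, thresholded
# by a value derived from the stored variance. Using k times the standard
# deviation (rather than the raw variance) is an assumption.
import numpy as np

def stationary_candidate_mask(frame, pixel_mean, pixel_var, k=3.0):
    diff = np.abs(frame.astype(np.float32) - pixel_mean)
    threshold = k * np.sqrt(pixel_var)  # per-pixel threshold based on the variance
    return diff > threshold             # True where the pixel deviates from normal traffic
```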
In some embodiments, the initial pixel information for the corresponding pixel location may be reconstructed based on pixel values having differences greater than the pixel threshold, e.g., in conjunction with the above example, the gaussian distribution for the corresponding pixel location may be reconstructed to obtain new pixel means and pixel variances.
In some embodiments, a difference value smaller than a preset pixel threshold value corresponding to each pixel position may be selected from the difference values, and the initial pixel information of the corresponding pixel position is updated based on the selected difference value.
For example, the corresponding pixel mean and pixel variance are updated based on the target pixel values corresponding to the selected differences. In combination with the above example, this can be understood as updating the Gaussian model based on those target pixel values, so as to obtain new Gaussian distribution information, i.e., new pixel means and pixel variances.
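A minimal sketch of such an update, assuming a running update at the pixel positions whose difference stayed below the threshold; the learning rate and the exact update rule are assumptions, not specified above.

```python
# Sketch of updating the stored Gaussian statistics at matching pixel positions
# (match_mask). The learning rate alpha and the running-update form are
# illustrative assumptions.
import numpy as np

def update_background(frame, pixel_mean, pixel_var, match_mask, alpha=0.05):
    f = frame.astype(np.float32)
    pixel_mean[match_mask] = (1 - alpha) * pixel_mean[match_mask] + alpha * f[match_mask]
    pixel_var[match_mask] = ((1 - alpha) * pixel_var[match_mask]
                             + alpha * (f[match_mask] - pixel_mean[match_mask]) ** 2)
    return pixel_mean, pixel_var
```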
In this embodiment, the accuracy and reliability of the pixel information may be improved by updating the initial pixel information, so as to improve the technical effect of the reliability and accuracy of the traffic event detection.
S306: and filtering the target static object according to the preset region of interest.
The interested area is an area of a non-stay position on a preset road section.
For example, a non-region-of-interest may be a region divided from the preset road section based on requirements that can serve as a stop location, such as an emergency stop zone; the region of interest is the area other than the non-region-of-interest.
This step can be understood as follows: the target stationary objects may include vehicles, pedestrians and the like located in the non-region-of-interest, and the road side device may filter out the target stationary objects in the non-region-of-interest to obtain the target stationary objects in the region of interest.
In this embodiment, the target stationary objects are filtered according to the region of interest, so that target stationary objects in the non-region-of-interest do not need to be analyzed. This reduces the number of analysis objects and the analysis cost, saves analysis resources, avoids noise data in the analysis process, and improves the accuracy and reliability of the subsequent traffic event determination.
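A minimal sketch of this filtering, assuming the region of interest is supplied as a polygon in image coordinates and the stationary detections are given as a boolean mask; both representations and the OpenCV usage are assumptions.

```python
# Sketch of keeping only stationary pixels inside the region of interest.
# The ROI is assumed to be a polygon in image coordinates.
import cv2
import numpy as np

def filter_by_roi(stationary_mask, roi_polygon):
    roi_mask = np.zeros(stationary_mask.shape, dtype=np.uint8)
    cv2.fillPoly(roi_mask, [np.asarray(roi_polygon, dtype=np.int32)], 1)
    return np.logical_and(stationary_mask, roi_mask.astype(bool))
```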
S307: and identifying the category of the target static object which is reserved through filtering processing, and obtaining an identification result.
The recognition result may be a vehicle, a pedestrian, or another obstacle.
Generally, traffic events occur between vehicles and pedestrians, and thus, in the present embodiment, after filtering a target stationary object, the category of the target stationary object that remains through the filtering process is identified so that when the identification result is a vehicle or a pedestrian, a subsequent operation is performed, thereby improving the reliability of detection of traffic events.
S308: if the recognition result is a vehicle and/or a pedestrian, determining the stay time of the target stationary object which is reserved through the filtering processing.
In some embodiments, determining the dwell time of the target stationary object retained by the filtering process may include the steps of:
step 1: and determining attribute information of the target static object which is reserved through filtering processing, and acquiring pre-stored attribute information of the initial static object.
The road side equipment can obtain a static object (namely an initial static object) through detecting the first frame image, and store attribute information of the initial static object.
The attribute information may include a size attribute, a color attribute, a shape attribute, a time attribute, and the like.
Step 2: if the target static object which is reserved through filtering processing and the initial static object are determined to be the same static object according to the attribute information of the target static object which is reserved through filtering processing and the attribute information of the initial static object, the static time of the initial static object is acquired.
For example, the roadside device may determine whether the attribute information of the two objects (the initial stationary object and the target stationary object retained by the filtering process) is the same; if so, the two objects are the same object, and the stationary time may be determined based on the time attribute of the initial stationary object.
For example, taking the size attribute as an example, the roadside device may determine the size attributes of the two objects respectively, determine whether the size attributes of the two objects are the same, and if so, consider the two objects to be the same object.
Specifically, the roadside apparatus may determine the time of each frame image, and if the initial stationary object is included in a certain frame image, may determine the stationary time of the initial stationary object as the time of that frame image.
Step 3: the dwell time of the target stationary object retained by the filtering process is determined from the stationary time.
In connection with the above example, the time of the current frame image may be determined, and the time difference between the time of the current frame image and the stationary time is determined as the stay time.
In the present embodiment, determining the stay time of the target stationary object retained by the filtering process based on its attribute information and the attribute information of the initial stationary object improves the reliability and accuracy of the stay time determination.
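A minimal sketch of this attribute-based matching and stay-time computation; the attribute fields and the simple equality test are illustrative simplifications of the size, color, shape and time attributes mentioned above.

```python
# Sketch of matching the retained stationary object against the stored initial
# stationary object by attributes and deriving the stay time from the stored
# stationary time. Fields and the equality test are illustrative.
from dataclasses import dataclass

@dataclass
class StationaryObject:
    size: tuple             # e.g. bounding-box (width, height) in pixels
    category: str           # e.g. "vehicle" or "pedestrian"
    stationary_time: float  # time of the frame in which the object first appeared

def stay_time(current: StationaryObject, initial: StationaryObject, current_frame_time: float) -> float:
    same_object = (current.size == initial.size and current.category == initial.category)
    return current_frame_time - initial.stationary_time if same_object else 0.0
```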
S309: and if the stay time is greater than the preset time threshold value, determining that the traffic event exists in the preset road section.
Illustratively, with respect to S309, reference may be made to the partial description in S203, which is not repeated here.
S310: a prompt message and/or a driving strategy adjustment message is generated and output.
The prompt message is used for indicating the traffic event of the preset road section, and the driving strategy adjustment message is used for indicating the adjustment of the driving strategy of the vehicle based on the driving strategy adjustment message.
In combination with the above example and the application scenario shown in fig. 1, when the road side device determines that a traffic event exists, in one example the road side device may generate a prompt message and transmit it to a vehicle that has a connection relationship with the road side device, so as to inform the vehicle that a traffic event has occurred on the preset road section; the vehicle can then make a corresponding driving policy adjustment based on the prompt message, such as adjusting its route to avoid driving on the preset road section. In another example, the road side device may generate a driving strategy adjustment message and transmit it to a vehicle that has a connection relationship with the road side device, so that the vehicle adjusts its driving strategy based on the driving strategy adjustment message, such as re-planning its route.
By generating and outputting the prompt message and/or the driving strategy adjustment message, the safety and reliability of vehicle driving can be improved.
Fig. 4 is a schematic diagram of a third embodiment of the present application, and as shown in fig. 4, a traffic event detection device 400 of the present embodiment includes:
the first obtaining unit 401 is configured to obtain an image to be detected corresponding to a preset road section, and obtain target pixel information corresponding to each pixel position in the image to be detected.
The first determining unit 402 is configured to determine a target stationary object in an image to be detected according to target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, where the initial pixel information is obtained by analyzing a plurality of sample images of a preset road section, and the sample images are images of the preset road section under normal traffic.
A second determining unit 403 for determining a dwell time of the target stationary object.
The third determining unit 404 is configured to determine that a traffic event exists in the preset road segment if the stay time is greater than the preset time threshold.
Fig. 5 is a schematic diagram of a fourth embodiment of the present application, and as shown in fig. 5, a traffic event detection device 500 of the present embodiment includes:
the second obtaining unit 501 is configured to obtain a plurality of sample images of a preset road section, where the sample images are images corresponding to normal traffic.
A third acquiring unit 502, configured to acquire each sample image, and a pixel value at each pixel position.
A fourth determining unit 503, configured to determine, for any pixel position, initial pixel information of the any pixel position according to a pixel value of each sample image at the any pixel position.
The first obtaining unit 504 is configured to obtain an image to be detected corresponding to a preset road section, and obtain target pixel information corresponding to each pixel position in the image to be detected.
The first determining unit 505 is configured to determine, according to target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, a target stationary object in an image to be detected, where the initial pixel information is obtained by analyzing a plurality of sample images of a preset road section, and the sample images are images of the preset road section under normal traffic.
As can be appreciated in conjunction with fig. 5, in some embodiments, the target pixel information comprises a target pixel value and the initial pixel information comprises a pixel mean; the first determination unit 505 includes:
the calculating subunit 5051 is configured to calculate a difference between the target pixel value corresponding to each pixel position and the pixel mean value, where the difference information includes the difference.
The first obtaining subunit 5052 is configured to obtain, from each pixel position, a pixel position with a difference greater than a preset pixel threshold corresponding to each pixel position.
A first determining subunit 5053 is configured to determine, as a target stationary object, a pixel position where the difference is greater than the pixel threshold value, corresponding to the object in the image to be detected.
And an updating subunit 5054, configured to select, from the differences, a difference value smaller than the pixel threshold value corresponding to each pixel position, and update the initial pixel information of the corresponding pixel position based on the selected difference value.
The filtering unit 506 is configured to perform filtering processing on the target stationary object according to a preset region of interest, where the region of interest is a region of a non-stay position on a preset road section.
And the identifying unit 507 is configured to identify a category of the target stationary object, obtain an identification result, and if the identification result is a vehicle and/or a pedestrian, determine a residence time of the target stationary object.
A second determining unit 508 for determining a dwell time of the target stationary object.
As can be seen in connection with fig. 5, in some embodiments, the second determining unit 508 comprises:
a second determining subunit 5081, configured to determine attribute information of the target stationary object.
A second acquiring subunit 5082, configured to acquire attribute information of a pre-stored initial stationary object.
The third obtaining subunit 5083 is configured to obtain the rest time of the initial still object if it is determined that the target still object and the initial still object are the same still object according to the attribute information of the target still object and the attribute information of the initial still object.
A third determination subunit 5084 for determining a dwell time of the target stationary object from the stationary time.
The third determining unit 509 is configured to determine that a traffic event exists in a preset road segment if the stay time is greater than a preset time threshold.
The generating unit 510 is configured to generate a prompt message and/or a driving policy adjustment message, where the prompt message is used to instruct the preset road section to generate the traffic event, and the driving policy adjustment message is used to instruct the vehicle to adjust the driving policy based on the driving policy adjustment message.
An output unit 511 for outputting a prompt message and/or a driving policy adjustment message.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
According to an embodiment of the present application, there is also provided a computer program product comprising: a computer program stored in a readable storage medium, from which at least one processor of an electronic device can read, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any one of the embodiments described above.
Fig. 6 shows a schematic block diagram of an example electronic device 600 that may be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the electronic device 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as a method of detecting traffic events. For example, in some embodiments, the method of detecting traffic events may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by the computing unit 601, one or more steps of the traffic event detection method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the method of detecting traffic events in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or a cloud host, and is a host product in a cloud computing service system, so that the defects of high management difficulty and weak service expansibility in the traditional physical hosts and VPS service ("Virtual Private Server" or simply "VPS") are overcome. The server may also be a server of a distributed system or a server that incorporates a blockchain.
According to another aspect of the embodiments of the present application, there is further provided a roadside apparatus, including the electronic apparatus described in the above embodiments.
According to another aspect of the embodiments of the present application, the embodiments of the present application further provide a cloud control platform, including the electronic device described in the foregoing embodiments.
According to another aspect of the embodiments of the present application, there is further provided a traffic event detection system, including a camera and the detection device described in the above embodiments, wherein the camera is configured to collect an image to be detected corresponding to a preset road section and send the image to be detected to the device.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions disclosed in the present application can be achieved, and are not limited herein.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (23)

1. A method of detecting a traffic event, comprising:
acquiring an image to be detected corresponding to a preset road section, and acquiring target pixel information corresponding to each pixel position in the image to be detected;
determining a target static object in the image to be detected according to target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, wherein the initial pixel information is obtained by analyzing a plurality of sample images of the preset road section, and the sample images are images of the preset road section under normal traffic;
determining attribute information of the target static object, and acquiring pre-stored attribute information of an initial static object;
if the target static object and the initial static object are the same static object according to the attribute information of the target static object and the attribute information of the initial static object, acquiring the static time of the initial static object;
and determining the stay time of the target static object according to the static time, and determining that a traffic event exists in the preset road section if the stay time is greater than a preset time threshold.
2. The method of claim 1, wherein determining the target stationary object in the image to be detected from the target pixel information for each pixel location and the pre-stored initial pixel information for each pixel location comprises:
and determining difference information between target pixel information and initial pixel information of each pixel position, and determining the target static object according to each difference information.
3. The method of claim 2, wherein the target pixel information comprises a target pixel value and the initial pixel information comprises a pixel mean; and determining the difference information between the target pixel information and the initial pixel information of each pixel position and determining the target stationary object according to the difference information comprises:
calculating, for each pixel position, a difference value between the target pixel value and the pixel mean, wherein the difference information comprises the difference value;
obtaining, from the pixel positions, pixel positions whose difference values are greater than preset pixel thresholds corresponding to those pixel positions; and
determining an object in the image to be detected corresponding to the pixel positions whose difference values are greater than the pixel thresholds as the target stationary object.
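For illustration only, a minimal sketch of the per-pixel comparison in claims 2 and 3, assuming a grayscale image, a pre-computed per-pixel mean, and OpenCV's connected-component labelling to turn the above-threshold pixel positions into candidate objects (the labelling step and the area filter are assumptions, not claim language):

    import numpy as np
    import cv2  # OpenCV, used here only for connected-component labelling


    def find_stationary_candidates(frame_gray, pixel_mean, pixel_threshold, min_area=50):
        """frame_gray: HxW uint8 image to be detected (target pixel values).
        pixel_mean: HxW float array, the pre-stored per-pixel mean.
        pixel_threshold: scalar or HxW array of preset pixel thresholds.
        Returns bounding boxes (x, y, w, h) of regions whose difference exceeds the threshold."""
        diff = np.abs(frame_gray.astype(np.float32) - pixel_mean)   # difference value per pixel position
        mask = (diff > pixel_threshold).astype(np.uint8)            # pixel positions above the threshold
        num, _, stats, _ = cv2.connectedComponentsWithStats(mask)   # group those positions into objects
        boxes = []
        for i in range(1, num):                                     # label 0 is the background
            x, y, w, h, area = stats[i]
            if area >= min_area:                                    # drop tiny noise blobs (assumed filter)
                boxes.append((int(x), int(y), int(w), int(h)))
        return boxes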
4. The method of claim 3, wherein the initial pixel information further comprises a pixel variance, the pixel threshold being determined based on the pixel variance.
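Claim 4 leaves the mapping from pixel variance to pixel threshold open; a common choice, used here purely as an assumed example in the spirit of per-pixel Gaussian background models, is a multiple of the standard deviation:

    import numpy as np


    def pixel_threshold_from_variance(pixel_variance, k=2.5, floor=10.0):
        # Per-pixel threshold as k standard deviations, with a floor so that
        # near-constant background pixels do not end up with a zero threshold.
        return np.maximum(k * np.sqrt(pixel_variance), floor)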
5. The method according to any one of claims 1 to 4, wherein, before acquiring the image to be detected corresponding to the preset road section, the method further comprises:
acquiring a plurality of sample images of the preset road section, wherein the sample images are images corresponding to normal traffic;
obtaining, for each sample image, a pixel value at each pixel position; and
for any pixel position, determining the initial pixel information of that pixel position according to the pixel values of the sample images at that pixel position.
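For illustration only, the initial pixel information of claim 5 could be realised as a per-pixel mean and variance computed over the stack of sample images; the grayscale assumption and the function name are illustrative, not taken from the patent:

    import numpy as np


    def build_initial_pixel_info(sample_images):
        """sample_images: list of HxW grayscale images of the preset road section
        under normal traffic. Returns the per-pixel mean and variance, i.e. the
        initial pixel information for every pixel position."""
        stack = np.stack([img.astype(np.float32) for img in sample_images], axis=0)
        return stack.mean(axis=0), stack.var(axis=0)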
6. The method of any of claims 1 to 4, wherein, before determining the stay time of the target stationary object, the method further comprises:
filtering the target stationary object according to a preset region of interest, wherein the region of interest is a region of non-stay positions on the preset road section.
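For illustration only, and reading the region of interest as a mask over positions where objects are not supposed to stay (one possible interpretation of claim 6), the filtering step could keep only candidates whose centre lies inside that mask:

    def filter_by_roi(boxes, roi_mask):
        """boxes: (x, y, w, h) candidate stationary objects.
        roi_mask: HxW boolean array, True at positions on the preset road section
        where objects are not supposed to stay (the preset region of interest).
        Keeps only candidates whose centre falls inside the region of interest."""
        kept = []
        for x, y, w, h in boxes:
            cx, cy = x + w // 2, y + h // 2
            if roi_mask[cy, cx]:
                kept.append((x, y, w, h))
        return kept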
7. The method of any one of claims 1 to 4, further comprising:
identifying a category of the target stationary object to obtain an identification result, and determining the stay time of the target stationary object if the identification result is a vehicle and/or a pedestrian.
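For illustration only, the category check of claim 7 amounts to gating the stay-time logic on a recognition result; classify below is a hypothetical placeholder for whatever recognition model is used:

    VEHICLE = "vehicle"
    PEDESTRIAN = "pedestrian"


    def should_track_stay_time(candidate_crop, classify):
        # `classify` is a hypothetical callable mapping an image crop of the target
        # stationary object to a class label; only vehicles and pedestrians are tracked.
        return classify(candidate_crop) in (VEHICLE, PEDESTRIAN)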
8. The method of claim 3, wherein, after calculating the difference value between the target pixel value and the pixel mean for each pixel position, the method further comprises:
selecting, from the difference values, difference values smaller than the pixel thresholds corresponding to the respective pixel positions, and updating the initial pixel information of the corresponding pixel positions based on the selected difference values.
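For illustration only, the update of claim 8 can be realised as a running average applied only at pixel positions whose difference value stays below the threshold; the learning rate alpha is an assumed parameter, not a value from the patent:

    import numpy as np


    def update_initial_pixel_info(frame_gray, pixel_mean, pixel_variance, pixel_threshold, alpha=0.05):
        """Update the pre-stored mean and variance only at pixel positions whose
        difference value is below the pixel threshold (i.e. still background)."""
        frame = frame_gray.astype(np.float32)
        diff = np.abs(frame - pixel_mean)
        is_background = diff < pixel_threshold
        new_mean = np.where(is_background, (1 - alpha) * pixel_mean + alpha * frame, pixel_mean)
        new_var = np.where(is_background,
                           (1 - alpha) * pixel_variance + alpha * (frame - new_mean) ** 2,
                           pixel_variance)
        return new_mean, new_var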
9. The method of any one of claims 1 to 4, further comprising, after determining that the traffic event exists in the preset road section:
generating and outputting a prompt message and/or a driving strategy adjustment message, wherein the prompt message is used for indicating that the traffic event occurs in the preset road section, and the driving strategy adjustment message is used for instructing a vehicle to adjust its driving strategy based on the driving strategy adjustment message.
10. A traffic event detection apparatus, comprising:
a first acquisition unit, configured to acquire an image to be detected corresponding to a preset road section and acquire target pixel information corresponding to each pixel position in the image to be detected;
a first determining unit, configured to determine a target stationary object in the image to be detected according to the target pixel information of each pixel position and pre-stored initial pixel information of each pixel position, wherein the initial pixel information is obtained by analyzing a plurality of sample images of the preset road section, and the sample images are images of the preset road section under normal traffic;
a second determining unit, configured to determine attribute information of the target stationary object and acquire pre-stored attribute information of an initial stationary object;
acquire a stationary time of the initial stationary object if it is determined, according to the attribute information of the target stationary object and the attribute information of the initial stationary object, that the target stationary object and the initial stationary object are the same stationary object; and
determine a stay time of the target stationary object according to the stationary time; and
a third determining unit, configured to determine that a traffic event exists in the preset road section if the stay time is greater than a preset time threshold.
11. The apparatus of claim 10, wherein the first determining unit is configured to determine difference information between the target pixel information and the initial pixel information of each pixel position, and determine the target stationary object according to the difference information of each pixel position.
12. The apparatus of claim 11, wherein the target pixel information comprises a target pixel value and the initial pixel information comprises a pixel mean; and, for determining the difference information between the target pixel information and the initial pixel information of each pixel position, the first determining unit comprises:
A calculating subunit, configured to calculate a difference value between a target pixel value corresponding to each pixel position and a pixel mean value, where the difference information includes the difference value;
a first obtaining subunit, configured to obtain, from the pixel positions, pixel positions whose difference values are greater than the preset pixel thresholds corresponding to those pixel positions; and
a first determination subunit, configured to determine, as the target stationary object, an object in the image to be detected corresponding to the pixel positions whose difference values are greater than the pixel thresholds.
13. The apparatus of claim 12, wherein the initial pixel information further comprises a pixel variance, the pixel threshold being determined based on the pixel variance.
14. The apparatus of any of claims 10 to 13, further comprising:
a second acquisition unit, configured to acquire a plurality of sample images of the preset road section, wherein the sample images are images corresponding to normal traffic;
a third obtaining unit, configured to obtain a pixel value of each pixel position of each sample image;
a fourth determining unit, configured to determine, for any pixel position, the initial pixel information of that pixel position according to the pixel values of the sample images at that pixel position.
15. The apparatus of any of claims 10 to 13, further comprising:
a filtering unit, configured to filter the target stationary object according to a preset region of interest, wherein the region of interest is a region of non-stay positions on the preset road section.
16. The apparatus of any of claims 10 to 13, further comprising:
an identification unit, configured to identify a category of the target stationary object to obtain an identification result, and to determine the stay time of the target stationary object if the identification result is a vehicle and/or a pedestrian.
17. The apparatus of claim 12, wherein the first determining unit further comprises:
an updating subunit, configured to select, from the difference values, difference values smaller than the pixel thresholds corresponding to the respective pixel positions, and to update the initial pixel information of the corresponding pixel positions based on the selected difference values.
18. The apparatus of any one of claims 10 to 13, wherein, after it is determined that the traffic event exists in the preset road section, the apparatus is further configured to:
generate and output a prompt message and/or a driving strategy adjustment message, wherein the prompt message is used for indicating that the traffic event occurs in the preset road section, and the driving strategy adjustment message is used for instructing a vehicle to adjust its driving strategy based on the driving strategy adjustment message.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.
20. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-9.
21. A roadside device comprising the electronic device of claim 19.
22. A cloud control platform comprising the electronic device of claim 19.
23. A system for detecting a traffic event, comprising: a camera and the apparatus according to any one of claims 10 to 18, wherein
the camera is configured to collect an image to be detected corresponding to a preset road section and send the image to be detected to the apparatus.
CN202110290137.5A 2021-03-18 2021-03-18 Traffic event detection method, road side equipment, cloud control platform and system Active CN113052047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110290137.5A CN113052047B (en) 2021-03-18 2021-03-18 Traffic event detection method, road side equipment, cloud control platform and system


Publications (2)

Publication Number Publication Date
CN113052047A CN113052047A (en) 2021-06-29
CN113052047B true CN113052047B (en) 2023-12-29

Family

ID=76513829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110290137.5A Active CN113052047B (en) 2021-03-18 2021-03-18 Traffic event detection method, road side equipment, cloud control platform and system

Country Status (1)

Country Link
CN (1) CN113052047B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113627412A (en) * 2021-08-02 2021-11-09 北京百度网讯科技有限公司 Target area detection method, target area detection device, electronic equipment and medium
CN114429702B (en) * 2021-12-30 2022-10-11 联通智网科技股份有限公司 Alarm implementation method and device


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070208498A1 (en) * 2006-03-03 2007-09-06 Inrix, Inc. Displaying road traffic condition information and user controls
US20180096595A1 (en) * 2016-10-04 2018-04-05 Street Simplified, LLC Traffic Control Systems and Methods

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19858477A1 (en) * 1998-12-17 2000-07-06 Siemens Ag Road traffic information detection method
KR20100119476A (en) * 2009-04-30 2010-11-09 (주) 서돌 전자통신 An outomatic sensing system for traffic accident and method thereof
CN102054277A (en) * 2009-11-09 2011-05-11 深圳市朗驰欣创科技有限公司 Method and system for detecting moving target, and video analysis system
CN102496000A (en) * 2011-11-14 2012-06-13 电子科技大学 Urban traffic accident detection method
RU2012104370A (en) * 2012-02-09 2013-08-20 Игорь Юрьевич Мацур METHOD FOR AUTOMATIC VEHICLE PARKING CONTROL
KR20150041433A (en) * 2013-10-08 2015-04-16 에스케이텔레콤 주식회사 Method and Apparatus for Detecting Object of Event
CN104537833A (en) * 2014-12-19 2015-04-22 深圳大学 Traffic abnormity detection method and system
CN106469311A (en) * 2015-08-19 2017-03-01 南京新索奇科技有限公司 Object detection method and device
US9418546B1 (en) * 2015-11-16 2016-08-16 Iteris, Inc. Traffic detection with multiple outputs depending on type of object detected
CN105788269A (en) * 2016-05-12 2016-07-20 招商局重庆交通科研设计院有限公司 Unmanned aerial vehicle-based abnormal traffic identification method
CN109934075A (en) * 2017-12-19 2019-06-25 杭州海康威视数字技术股份有限公司 Accident detection method, apparatus, system and electronic equipment
EP3578433A1 (en) * 2018-04-10 2019-12-11 Walter Steven Rosenbaum Method for estimating an accident risk of an autonomous vehicle
CN111222381A (en) * 2018-11-27 2020-06-02 中国移动通信集团上海有限公司 User travel mode identification method and device, electronic equipment and storage medium
CN111402612A (en) * 2019-01-03 2020-07-10 北京嘀嘀无限科技发展有限公司 Traffic incident notification method and device
CN109919053A (en) * 2019-02-24 2019-06-21 太原理工大学 A kind of deep learning vehicle parking detection method based on monitor video
CN111079621A (en) * 2019-12-10 2020-04-28 北京百度网讯科技有限公司 Method and device for detecting object, electronic equipment and storage medium
CN111369807A (en) * 2020-03-24 2020-07-03 北京百度网讯科技有限公司 Traffic accident detection method, device, equipment and medium
CN111832492A (en) * 2020-07-16 2020-10-27 平安科技(深圳)有限公司 Method and device for distinguishing static traffic abnormality, computer equipment and storage medium
CN112419722A (en) * 2020-11-18 2021-02-26 百度(中国)有限公司 Traffic abnormal event detection method, traffic control method, device and medium
CN112507813A (en) * 2020-11-23 2021-03-16 北京旷视科技有限公司 Event detection method and device, electronic equipment and storage medium
CN112507964A (en) * 2020-12-22 2021-03-16 北京百度网讯科技有限公司 Detection method and device for lane-level event, road side equipment and cloud control platform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A Simple Free-Flow Traffic Model for Vehicular Intermittently Connected Networks; M. J. Khabbaz et al.; IEEE Transactions on Intelligent Transportation Systems, Vol. 13, No. 3; 1312-1326 *
Research on a Machine-Vision-Based Traffic Event Detection Method; Yuan Weiqi (苑玮琦); Xie Changlong (谢昌隆); Computer Simulation (计算机仿真), No. 10; full text *

Also Published As

Publication number Publication date
CN113052047A (en) 2021-06-29

Similar Documents

Publication Publication Date Title
CN108725440B (en) Forward collision control method and apparatus, electronic device, program, and medium
CN113052047B (en) Traffic event detection method, road side equipment, cloud control platform and system
CN113012176B (en) Sample image processing method and device, electronic equipment and storage medium
CN112580571A (en) Vehicle running control method and device and electronic equipment
CN113135193B (en) Method, device, storage medium and program product for outputting early warning information
CN113011323B (en) Method for acquiring traffic state, related device, road side equipment and cloud control platform
CN112863187B (en) Detection method of perception model, electronic equipment, road side equipment and cloud control platform
CN113859264B (en) Vehicle control method, device, electronic equipment and storage medium
CN114120253A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113538963A (en) Method, apparatus, device and storage medium for outputting information
CN112966599B (en) Training method of key point recognition model, key point recognition method and device
CN113177497B (en) Training method of visual model, vehicle identification method and device
CN113920158A (en) Training and traffic object tracking method and device of tracking model
EP4080479A2 (en) Method for identifying traffic light, device, cloud control platform and vehicle-road coordination system
CN113946729B (en) Data processing method and device for vehicle, electronic equipment and medium
CN113119999B (en) Method, device, equipment, medium and program product for determining automatic driving characteristics
CN114677848A (en) Perception early warning system, method, device and computer program product
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN113989300A (en) Lane line segmentation method and device, electronic equipment and storage medium
CN113806361B (en) Method, device and storage medium for associating electronic monitoring equipment with road
CN113947945B (en) Vehicle driving alarm method and device, electronic equipment and readable storage medium
CN112700657B (en) Method and device for generating detection information, road side equipment and cloud control platform
CN114911813B (en) Updating method and device of vehicle-mounted perception model, electronic equipment and storage medium
CN115908838B (en) Vehicle presence detection method, device, equipment and medium based on radar fusion
CN117315406B (en) Sample image processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211019

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, 10 Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

GR01 Patent grant