CN112507957A - Vehicle association method and device, road side equipment and cloud control platform - Google Patents
- Publication number: CN112507957A
- Application number: CN202011520233.6A
- Authority: CN (China)
- Prior art keywords: observation; current vehicle; point; sequence; relative
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06V20/54—Surveillance or monitoring of activities of traffic, e.g. cars on the road, trains or boats
- G06V20/584—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians, of vehicle lights or traffic lights
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V2201/08—Detecting or categorising vehicles
- G06F18/22—Pattern recognition: matching criteria, e.g. proximity measures
- G06T7/20—Image analysis: analysis of motion
- G06T7/292—Multi-camera tracking
- G06T7/70—Determining position or orientation of objects or cameras
- G06T2207/10016—Video; Image sequence
- G06T2207/30236—Traffic on road, railway or crossing
- G06T2207/30241—Trajectory
- G08G1/0175—Identifying vehicles by photographing vehicles, e.g. when violating traffic rules
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
- G08G1/0116—Measuring and analyzing of parameters relative to traffic conditions based on the source of data from roadside infrastructure, e.g. beacons
Abstract
The present disclosure provides a vehicle association method and apparatus, a roadside device, and a cloud control platform, and relates to the technical field of artificial intelligence, in particular to intelligent transportation. The specific implementation scheme is as follows: for each observation point, acquiring images of a current vehicle running on a road according to a preset period; determining an original observation sequence of each observation point relative to the current vehicle based on the images acquired by that observation point within a preset time period; determining a target observation sequence of each observation point relative to the current vehicle based on its original observation sequence; and detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequences. The method and the device effectively prevent erroneous associations caused by anomalies in single-frame images, thereby improving the association success rate.
Description
Technical Field
The present disclosure relates to the technical field of artificial intelligence, further to intelligent transportation technology, and in particular to a vehicle association method and apparatus, a roadside device, and a cloud control platform.
Background
Vehicle association is a core problem in intelligent transportation and related technologies. In a real environment, a single observation point cannot accurately obtain all of the information about a vehicle: different observation points, in different directions and at different angles, collect different information. It therefore becomes necessary to combine data from different observation points for the same vehicle to obtain highly accurate information about the vehicle from multiple directions and angles.
Most traditional association methods are based on single-frame images. Single-frame images have inherent limitations: they cannot accurately associate images acquired by different observation points, so the association success rate is low.
Summary
The present disclosure provides a vehicle association method and apparatus, a roadside device, and a cloud control platform, which effectively prevent erroneous associations caused by anomalies in single-frame images and thereby improve the association success rate.
According to a first aspect of the present disclosure, there is provided a vehicle association method, the method comprising:
acquiring, for each observation point, images of a current vehicle running on a road according to a preset period;
determining an original observation sequence of each observation point relative to the current vehicle based on the images acquired by that observation point within a preset time period;
determining a target observation sequence of each observation point relative to the current vehicle based on its original observation sequence; and
detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequences of the observation points relative to the current vehicle.
According to a second aspect of the present disclosure, there is provided a vehicle association apparatus, the apparatus comprising an acquisition module, a determination module and a detection module, wherein:
the acquisition module is configured to acquire, for each observation point, images of a current vehicle running on a road according to a preset period;
the determination module is configured to determine an original observation sequence of each observation point relative to the current vehicle based on the images acquired by that observation point within a preset time period, and to determine a target observation sequence of each observation point relative to the current vehicle based on its original observation sequence;
the detection module is configured to detect whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequences of the observation points relative to the current vehicle.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the vehicle association method of any embodiment of the present application.
According to a fourth aspect of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements a vehicle association method as described in any embodiment of the present application.
According to a fifth aspect of the present disclosure, there is provided a computer program product which, when executed by a computer device, implements the vehicle association method of any embodiment of the present application.
According to a sixth aspect of the present disclosure, a roadside apparatus is provided, which includes the electronic apparatus according to the embodiment of the present application.
According to a seventh aspect of the present disclosure, a cloud control platform is provided, which includes the electronic device according to the embodiment of the present application.
The technical scheme of the present application solves the technical problems of the prior art, in which association is performed on single-frame images whose limitations prevent images collected by different observation points from being accurately associated, resulting in a low association success rate.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a first schematic flowchart of a vehicle association method provided by an embodiment of the application;
FIG. 2 is a second schematic flowchart of a vehicle association method provided by an embodiment of the application;
FIG. 3 is a third schematic flowchart of a vehicle association method provided by an embodiment of the application;
FIG. 4 is a scene diagram of a vehicle driving on a road provided by an embodiment of the application;
FIG. 5 is a schematic structural diagram of a vehicle association apparatus provided by an embodiment of the application;
FIG. 6 is a schematic structural diagram of a detection module provided by an embodiment of the application;
FIG. 7 is a block diagram of an electronic device for implementing a vehicle association method according to an embodiment of the application.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings. Various details of the embodiments are included to assist understanding and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Descriptions of well-known functions and constructions are omitted from the following description for clarity and conciseness.
Example one
Fig. 1 is a first flowchart of a vehicle association method provided by an embodiment of the present application. The method may be executed by a vehicle association apparatus, an electronic device, or a roadside device; any of these may be implemented in software and/or hardware and may be integrated into any intelligent device with a network communication function. As shown in fig. 1, the vehicle association method may include the following steps:
S101, for each observation point, acquiring images of the current vehicle running on the road according to a preset period.
In this step, for each observation point, the electronic device may acquire images of the current vehicle running on the road according to a preset period. An observation point in the present application may be any of various types of image capture devices, such as a camera or a video camera. Specifically, while the vehicle is driving, the electronic device may capture images of the vehicle according to a preset period; for example, the capture frequency may be 60 Hz.
S102, determining an original observation sequence of each observation point relative to the current vehicle based on the images acquired by each observation point within a preset time period.
In this step, the electronic device may determine an original observation sequence of each observation point relative to the current vehicle based on the images acquired by that observation point within a preset time period. Specifically, the electronic device may determine, from the images acquired by each observation point within the preset time period, the position of the current vehicle at the time each image was acquired; then determine the observation data of each observation point at each time point from that position and the acquisition time of each image, where each observation datum comprises a time point and a position; and then assemble the observation data of each observation point at each time point into the original observation sequence of that observation point relative to the current vehicle.
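The observation data described in this step can be sketched as a small data structure. This is a minimal illustration, not the patent's implementation: the `Observation` name, 2-D road-plane coordinates, and sorting by time point are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    t: float  # time point at which the image was acquired
    x: float  # position of the current vehicle in that image
    y: float

def build_observation_sequence(detections):
    """Assemble one observation point's original observation sequence for a
    vehicle from per-frame detections, ordered by acquisition time point.

    `detections` is an iterable of (time_point, (x, y)) pairs, where the
    position is the vehicle location recovered from that frame."""
    return sorted(
        (Observation(t, x, y) for t, (x, y) in detections),
        key=lambda obs: obs.t,
    )
```

Sorting makes the sequence time-ordered even if frames arrive out of order, which the later interception and track-length steps implicitly rely on.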
S103, determining a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle.
In this step, the electronic device may determine a target observation sequence of each observation point relative to the current vehicle based on its original observation sequence. Specifically, the electronic device may detect whether the original observation sequence of each observation point relative to the current vehicle satisfies an interception condition; if so, the electronic device may intercept that original observation sequence based on the time points in its observation data, obtaining a target observation sequence of that observation point relative to the current vehicle.
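One plausible reading of the interception is truncating both sequences to their common time window. The following sketch assumes that reading; the exact interception condition and the time tolerance are not specified by the patent.

```python
def intercept_overlap(seq_a, seq_b, tol=1e-6):
    """Truncate two time-ordered observation sequences to their common
    time window so they can be compared against each other.

    Each sequence is a list of (t, x, y) tuples sorted by t; only
    observations whose time points fall inside the overlap of the two
    sequences' time ranges survive the interception."""
    if not seq_a or not seq_b:
        return [], []
    start = max(seq_a[0][0], seq_b[0][0]) - tol
    end = min(seq_a[-1][0], seq_b[-1][0]) + tol
    clip = lambda seq: [obs for obs in seq if start <= obs[0] <= end]
    return clip(seq_a), clip(seq_b)
```

With a sequence spanning time points 1 to 10 and another spanning 7 to 10, only the observations from 7 onward remain in both outputs.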
S104, detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequence of each observation point relative to the current vehicle.
In this step, the electronic device may detect, based on the target observation sequences relative to the current vehicle, whether the current vehicles observed at every two observation points are the same vehicle. Specifically, the electronic device may calculate, from the target observation sequences, the average length of the travel tracks of the current vehicle observed by every two observation points and the area between those travel tracks; then calculate the similarity of the current vehicles observed by every two observation points from that area and average length; and finally decide, from that similarity, whether the current vehicles observed by every two observation points are the same vehicle.
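The pairwise detection in this step can be sketched as a loop over all observation-point pairs. The similarity function is passed in, and the decision threshold of 0.9 is a placeholder assumption, since the patent does not fix a threshold value.

```python
import itertools

def associate(sequences, similarity_fn, threshold=0.9):
    """Decide, for every pair of observation points, whether the current
    vehicles they observed are the same vehicle.

    `sequences` maps an observation-point id to that point's target
    observation sequence; `similarity_fn` scores two sequences in [0, 1]."""
    result = {}
    for (pa, sa), (pb, sb) in itertools.combinations(sequences.items(), 2):
        result[(pa, pb)] = similarity_fn(sa, sb) >= threshold
    return result
```

Any similarity measure over sequences can be plugged in; the area-based trajectory similarity described later in the document is one candidate.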
According to the vehicle association method provided by this embodiment of the application, for each observation point, images of a current vehicle running on a road are acquired according to a preset period; an original observation sequence of each observation point relative to the current vehicle is determined from the images acquired within a preset time period; a target observation sequence of each observation point is determined from its original observation sequence; and whether the current vehicles observed by every two observation points are the same vehicle is detected from the target observation sequences. That is, the present application performs vehicle association based on the observation sequence of each observation point relative to the current vehicle, whereas most traditional association methods are based on single-frame images. This technical means solves the problems of the prior art, in which the limitations of single frames prevent images collected by different observation points from being accurately associated and the association success rate is low. Moreover, the technical scheme of this embodiment is simple and convenient to implement, convenient to popularize, and widely applicable.
Example two
Fig. 2 is a second flowchart of a vehicle association method provided in the embodiment of the present application. As shown in fig. 2, the vehicle association method may include the steps of:
S201, for each observation point, acquiring images of the current vehicle running on the road according to a preset period.
S202, determining the position of the current vehicle when each observation point collects each image based on the images collected by each observation point in a preset time period.
In this step, the electronic device may determine, based on the images acquired by each observation point within the preset time period, the position of the current vehicle at the time each image was acquired. Specifically, the electronic device may recognize the acquired images; for example, it may input the images acquired by each observation point within the preset time period into a pre-trained image recognition model, which determines the position of the current vehicle in each image.
S203, determining the observation data of each observation point at each time point according to the position of the current vehicle when each observation point acquired each image and the time point at which each image was acquired, where each observation datum comprises a time point and a position.
In this step, the electronic device may determine the observation data of each observation point at each time point according to the position of the current vehicle when each observation point acquired each image and the time point at which each image was acquired, where each observation datum comprises a time point and a position. Specifically, assume the positions of the current vehicle when observation point A acquired each image are A1, A2, A3, …, An, where n is a natural number greater than or equal to 1. Then A1 represents the observation data of observation point A at the first time point, A2 represents the observation data at the second time point, and so on.
S204, obtaining an original observation sequence of each observation point relative to the current vehicle from the observation data of each observation point at each time point.
S205, determining a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle.
S206, detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequence of each observation point relative to the current vehicle.
According to the vehicle association method provided by this embodiment of the application, for each observation point, images of a current vehicle running on a road are acquired according to a preset period; an original observation sequence of each observation point relative to the current vehicle is determined from the images acquired within a preset time period; a target observation sequence of each observation point is determined from its original observation sequence; and whether the current vehicles observed by every two observation points are the same vehicle is detected from the target observation sequences. That is, the present application performs vehicle association based on the observation sequence of each observation point relative to the current vehicle, whereas most traditional association methods are based on single-frame images. This technical means solves the problems of the prior art, in which the limitations of single frames prevent images collected by different observation points from being accurately associated and the association success rate is low. Moreover, the technical scheme of this embodiment is simple and convenient to implement, convenient to popularize, and widely applicable.
Example three
FIG. 3 is a third flowchart of a vehicle association method according to an embodiment of the disclosure. As shown in fig. 3, the vehicle association method may include the steps of:
s301, acquiring images of the current vehicle running on the road according to a preset period for each observation point.
S302, determining an original observation sequence of each observation point relative to the current vehicle based on the images acquired by each observation point in a preset time period.
S303, determining a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle.
In this step, the electronic device may determine a target observation sequence of each observation point relative to the current vehicle based on its original observation sequence. Specifically, the electronic device may detect whether the original observation sequence of each observation point relative to the current vehicle satisfies an interception condition; if so, the electronic device may intercept that original observation sequence based on the time points in its observation data to obtain a target observation sequence of that observation point relative to the current vehicle. For example, assume that for observation point A the collected original observation sequence is A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, and that for observation point B the collected original observation sequence is B7, B8, B9, B10. The original observation sequence collected by observation point A can then be intercepted in this step to A7, A8, A9, A10, which aligns with the original observation sequence collected by observation point B.
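A minimal sketch of this alignment keeps only observations whose time points occur in both sequences, mirroring the A7..A10 / B7..B10 example. Matching by exact time point is an assumption; a real system would likely tolerate clock jitter between observation points.

```python
def align_by_time(seq_a, seq_b):
    """Intercept two observation sequences so that only observations whose
    time points occur in both sequences remain.

    Each sequence is a list of (time_point, observation) pairs."""
    common = {t for t, _ in seq_a} & {t for t, _ in seq_b}
    keep = lambda seq: [(t, obs) for t, obs in seq if t in common]
    return keep(seq_a), keep(seq_b)
```

Applied to sequences labelled A1..A10 and B7..B10 at time points 1..10 and 7..10 respectively, this yields A7..A10 aligned with B7..B10.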
S304, calculating the average length of the travel tracks of the current vehicle observed by every two observation points and the area between those travel tracks, based on the target observation sequences of the observation points relative to the current vehicle.
In this step, the electronic device may calculate, based on the target observation sequences relative to the current vehicle, the average length of the travel tracks of the current vehicle observed by every two observation points and the area between those travel tracks. Specifically, when calculating the average length, the electronic device may extract the position from each observation datum in each target observation sequence; calculate, from those positions, the length of the travel track of the current vehicle observed by each observation point; and then average the two track lengths. For example, assume the positions of the current vehicle when observation point A acquired each image are A1, A2, A3, …, An, where n is a natural number greater than or equal to 1, and the positions when observation point B acquired each image are B1, B2, B3, …, Bm, where m is a natural number greater than or equal to 1. The length of the travel track of the current vehicle observed by observation point A can be expressed as dist(A1, A2, A3, …, An), the length observed by observation point B as dist(B1, B2, B3, …, Bm), and the average length of the two travel tracks as Length = [dist(A1, A2, A3, …, An) + dist(B1, B2, B3, …, Bm)]/2.
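The dist(·) and Length terms in this step can be sketched directly, assuming positions are 2-D points and dist is the cumulative straight-line length of the polyline through them:

```python
import math

def track_length(track):
    """dist(P1, ..., Pn): cumulative straight-line length of the travel
    track through the observed positions (a list of (x, y) points)."""
    return sum(math.dist(p, q) for p, q in zip(track, track[1:]))

def average_length(track_a, track_b):
    """Length = [dist(A1..An) + dist(B1..Bm)] / 2."""
    return (track_length(track_a) + track_length(track_b)) / 2
```

For a track from (0, 0) to (3, 4), `track_length` gives 5.0, the straight-line distance of the single segment.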
S305, calculating the similarity of the current vehicle observed by each two observation points according to the area between the running tracks of the current vehicle observed by each two observation points and the average length of the running tracks of the current vehicle observed by each two observation points.
In this step, the electronic device may calculate the similarity of the current vehicles observed by every two observation points according to the area between the running tracks of the current vehicle observed by the two observation points and the average length of those running tracks. Specifically, assume that the area between the running tracks of the current vehicle observed by two observation points is SA, and that the average length of those running tracks is Length; the electronic device may then calculate the similarity by the following formula: Similarity = 1 - SA/(Length × Length).
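The patent does not specify how the enclosed area SA is obtained; one plausible sketch, shown here purely as an assumption, approximates it with the shoelace formula over the polygon formed by walking forward along one track and back along the other:

```python
def polygon_area(points):
    """Shoelace formula: absolute area of a simple polygon given its vertices."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def trajectory_similarity(track_a, track_b, avg_length):
    """Similarity = 1 - SA/(Length * Length), where SA is the area
    enclosed between the two observed tracks."""
    sa = polygon_area(track_a + track_b[::-1])
    return 1.0 - sa / (avg_length * avg_length)

# Two parallel straight tracks, 1 unit apart, 10 units long: SA = 10, Length = 10.
track_a = [(0.0, 0.0), (10.0, 0.0)]
track_b = [(0.0, 1.0), (10.0, 1.0)]
print(trajectory_similarity(track_a, track_b, avg_length=10.0))  # 0.9
```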
And S306, detecting whether the current vehicles observed by every two observation points are the same vehicle or not according to the similarity of the current vehicles observed by every two observation points.
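As a hedged illustration of the decision in S306: the patent does not fix a concrete decision rule or threshold value, so the comparison below, and the value 0.8, are assumptions for illustration only.

```python
SIMILARITY_THRESHOLD = 0.8  # assumed value; not specified by the patent

def is_same_vehicle(similarity, threshold=SIMILARITY_THRESHOLD):
    """Associate the two observed vehicles when their trajectory
    similarity reaches the threshold."""
    return similarity >= threshold

print(is_same_vehicle(0.9))   # True
print(is_same_vehicle(0.35))  # False
```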
Fig. 4 is a scene diagram of a vehicle driving on a road according to the present application. As shown in fig. 4, even when individual single-frame observations of the same vehicle deviate considerably, the trajectory similarity remains high. The area-based trajectory similarity method can therefore remove such noise and avoid false associations of moving objects caused by inaccurate single-frame information.
According to the vehicle association method provided by the embodiment of the application, for each observation point, images of a current vehicle running on a road are collected according to a preset period; an original observation sequence of each observation point relative to the current vehicle is then determined based on the images acquired by each observation point within a preset time period; a target observation sequence of each observation point relative to the current vehicle is then determined based on the original observation sequence; and finally, whether the current vehicles observed by every two observation points are the same vehicle is detected based on the target observation sequences. That is, the present application performs vehicle association based on the observation sequence of each observation point relative to the current vehicle, whereas most existing association methods operate on a single frame image. Because association is based on a sequence of observations rather than a single frame, the technical problems of the prior art are solved: single-frame images are inherently limited, images collected by different observation points cannot be accurately associated, and the association success rate is low. Moreover, the technical scheme of the embodiment of the application is simple to implement, convenient to popularize, and widely applicable.
Example four
Fig. 5 is a schematic structural diagram of a vehicle association apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus 500 includes: an acquisition module 501, a determination module 502 and a detection module 503; wherein,
the acquisition module 501 is configured to acquire, for each observation point, an image of a current vehicle traveling on a road according to a preset period;
the determining module 502 is configured to determine an original observation sequence of each observation point relative to the current vehicle based on the images acquired by each observation point within a preset time period; and to determine a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle;
the detecting module 503 is configured to detect whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequence of each observation point relative to the current vehicle.
Further, the determining module 502 is specifically configured to determine, based on the image acquired by each observation point within the preset time period, a position of the current vehicle when each observation point acquires each image; determining the observation data of each observation point on each time point according to the position of the current vehicle when each observation point acquires each image and the time point when each observation point acquires each image; wherein the observation data comprises: a point in time and a location; and obtaining an original observation sequence of each observation point relative to the current vehicle according to the observation data of each observation point at each time point.
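A sketch of the observation data structure this module produces. Field names, types and the per-image detection format are assumptions for illustration; the patent only requires that each observation datum carry a time point and a position.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Observation:
    time_point: float              # when the image was captured
    position: Tuple[float, float]  # vehicle position derived from that image

def build_original_sequence(
    detections: List[Tuple[float, Tuple[float, float]]]
) -> List[Observation]:
    """Assemble one observation point's original observation sequence:
    one (time point, position) observation datum per captured image,
    ordered by capture time."""
    return sorted((Observation(t, p) for t, p in detections),
                  key=lambda o: o.time_point)

seq = build_original_sequence([(2.0, (4.0, 0.0)), (1.0, (2.0, 0.0)), (3.0, (6.0, 0.0))])
print([o.time_point for o in seq])  # [1.0, 2.0, 3.0]
```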
Further, the determining module 502 is specifically configured to detect, according to each observation datum of each observation point in the original observation sequence relative to the current vehicle, whether the original observation sequence of each observation point relative to the current vehicle satisfies an interception condition; and if so, to intercept the original observation sequence of each observation point relative to the current vehicle based on the time points in the observation data of each original observation sequence, obtaining a target observation sequence of each observation point relative to the current vehicle.
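The patent does not define the interception condition concretely. One plausible reading, shown here strictly as an assumption, is that two sequences are clipped to the time window they both cover, and the condition fails when there is no temporal overlap:

```python
def truncate_to_overlap(seq_a, seq_b):
    """Clip two non-empty, time-sorted observation sequences (lists of
    (time point, position)) to the time window covered by both,
    yielding the target observation sequences."""
    start = max(seq_a[0][0], seq_b[0][0])
    end = min(seq_a[-1][0], seq_b[-1][0])
    if start > end:
        return [], []  # no temporal overlap: interception condition not met
    clip = lambda seq: [(t, p) for t, p in seq if start <= t <= end]
    return clip(seq_a), clip(seq_b)

a = [(0, (0, 0)), (1, (1, 0)), (2, (2, 0)), (3, (3, 0))]
b = [(1, (1, 1)), (2, (2, 1)), (4, (4, 1))]
ta, tb = truncate_to_overlap(a, b)
print(len(ta), len(tb))  # 3 2
```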
Fig. 6 is a schematic structural diagram of a detection module provided in an embodiment of the present application. As shown in fig. 6, the detection module 503 includes: a calculation sub-module 5031 and a detection sub-module 5032; wherein,
the calculating sub-module 5031 is configured to calculate, based on the target observation sequence of each observation point relative to the current vehicle, the average length of the running tracks of the current vehicle observed by every two observation points and the area between the running tracks of the current vehicle observed by every two observation points; and to calculate the similarity of the current vehicles observed by every two observation points according to that area and that average length;
the detection submodule 5032 is configured to detect whether the current vehicle observed at each of the two observation points is the same vehicle according to the similarity of the current vehicle observed at each of the two observation points.
Further, the calculating sub-module 5031 is specifically configured to extract the position from each observation datum in the target observation sequence of each observation point relative to the current vehicle; to calculate, from the extracted positions, the length of the running track of the current vehicle observed by each observation point; and to calculate the average length of the running tracks of the current vehicle observed by every two observation points according to the lengths of the running tracks observed by the individual observation points.
The vehicle association apparatus can execute the method provided by any embodiment of the application, and has the corresponding functional modules and beneficial effects of the executed method. For technical details not elaborated in the present embodiment, reference may be made to the vehicle association method provided in any embodiment of the present application.
EXAMPLE five
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM)702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host: a host product in the cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services.
According to the embodiment disclosed in the application, the disclosure further provides the road side equipment and the cloud control platform, and the road side equipment and the cloud control platform can comprise the electronic equipment disclosed in the embodiment of the application. The roadside device may include a communication unit and the like in addition to the electronic device, and the electronic device may be integrated with the communication unit or may be provided separately. The electronic device may acquire data, such as pictures and videos, from a sensing device (e.g., a roadside camera) for video processing and data computation.
The cloud control platform executes processing at a cloud end, and electronic equipment included in the cloud control platform can acquire data of sensing equipment (such as a roadside camera), such as pictures, videos and the like, so as to perform video processing and data calculation; the cloud control platform can also be called a vehicle-road cooperative management platform, an edge computing platform, a cloud computing platform, a central system, a cloud server and the like.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (15)
1. A vehicle association method, the method comprising:
acquiring an image of a current vehicle running on a road according to a preset period for each observation point;
determining an original observation sequence of each observation point relative to the current vehicle based on an image acquired by each observation point within a preset time period;
determining a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle;
and detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequence of each observation point relative to the current vehicle.
2. The method of claim 1, wherein determining an original observation sequence of each observation point relative to the current vehicle based on the images acquired by each observation point over a preset time period comprises:
determining the position of the current vehicle when each observation point acquires each image based on the images acquired by each observation point in the preset time period;
determining the observation data of each observation point on each time point according to the position of the current vehicle when each observation point acquires each image and the time point when each observation point acquires each image; wherein the observation data comprises: a point in time and a location;
and obtaining an original observation sequence of each observation point relative to the current vehicle according to the observation data of each observation point at each time point.
3. The method of claim 1, wherein determining a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle comprises:
detecting, according to each observation datum of each observation point in the original observation sequence relative to the current vehicle, whether the original observation sequence of each observation point relative to the current vehicle satisfies an interception condition;
if the original observation sequence of each observation point relative to the current vehicle satisfies the interception condition, intercepting the original observation sequence of each observation point relative to the current vehicle based on the time points in the observation data of each original observation sequence, to obtain a target observation sequence of each observation point relative to the current vehicle.
4. The method of claim 1, wherein detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequence of each observation point relative to the current vehicle comprises:
calculating the average length of the running tracks of the current vehicle observed by every two observation points, and the area between the running tracks of the current vehicle observed by every two observation points, based on the target observation sequence of each observation point relative to the current vehicle;
calculating the similarity of the current vehicle observed by each two observation points according to the area between the running tracks of the current vehicle observed by each two observation points and the average length of the running tracks of the current vehicle observed by each two observation points;
and detecting whether the current vehicles observed by every two observation points are the same vehicle or not according to the similarity of the current vehicles observed by every two observation points.
5. The method of claim 4, wherein calculating the average length of the running tracks of the current vehicle observed by every two observation points based on the target observation sequence of each observation point relative to the current vehicle comprises:
extracting the position from each observation datum in the target observation sequence of each observation point relative to the current vehicle;
calculating, from the extracted positions, the length of the running track of the current vehicle observed by each observation point;
and calculating the average length of the running track of the current vehicle observed by every two observation points according to the length of the running track of the current vehicle observed by each observation point.
6. A vehicle association apparatus, the apparatus comprising: the device comprises an acquisition module, a determination module and a detection module; wherein,
the acquisition module is used for acquiring images of the current vehicle running on the road according to a preset period for each observation point;
the determining module is used for determining an original observation sequence of each observation point relative to the current vehicle based on the images acquired by each observation point within a preset time period; and for determining a target observation sequence of each observation point relative to the current vehicle based on the original observation sequence of each observation point relative to the current vehicle;
the detection module is used for detecting whether the current vehicles observed by every two observation points are the same vehicle based on the target observation sequence of each observation point relative to the current vehicle.
7. The apparatus according to claim 6, wherein the determining module is specifically configured to determine, based on the images acquired by each observation point within the preset time period, a position of the current vehicle at the time when each observation point acquires each image; determining the observation data of each observation point on each time point according to the position of the current vehicle when each observation point acquires each image and the time point when each observation point acquires each image; wherein the observation data comprises: a point in time and a location; and obtaining an original observation sequence of each observation point relative to the current vehicle according to the observation data of each observation point at each time point.
8. The apparatus according to claim 6, wherein the determining module is specifically configured to detect, according to each observation datum of each observation point in the original observation sequence relative to the current vehicle, whether the original observation sequence of each observation point relative to the current vehicle satisfies an interception condition; and if so, to intercept the original observation sequence of each observation point relative to the current vehicle based on the time points in the observation data of each original observation sequence, obtaining a target observation sequence of each observation point relative to the current vehicle.
9. The apparatus of claim 6, the detection module comprising: a calculation submodule and a detection submodule; wherein,
the calculation submodule is used for calculating, based on the target observation sequence of each observation point relative to the current vehicle, the average length of the running tracks of the current vehicle observed by every two observation points and the area between the running tracks of the current vehicle observed by every two observation points; and for calculating the similarity of the current vehicles observed by every two observation points according to that area and that average length;
the detection submodule is used for detecting whether the current vehicles observed by every two observation points are the same vehicle or not according to the similarity of the current vehicles observed by every two observation points.
10. The apparatus according to claim 9, wherein the calculation submodule is specifically configured to extract the position from each observation datum in the target observation sequence of each observation point relative to the current vehicle; to calculate, from the extracted positions, the length of the running track of the current vehicle observed by each observation point; and to calculate the average length of the running tracks of the current vehicle observed by every two observation points according to the lengths of the running tracks observed by the individual observation points.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
13. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-5.
14. A roadside apparatus comprising the electronic apparatus of claim 11.
15. A cloud controlled platform comprising the electronic device of claim 11.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011520233.6A CN112507957B (en) | 2020-12-21 | 2020-12-21 | Vehicle association method and device, road side equipment and cloud control platform |
EP21190441.2A EP3940667A3 (en) | 2020-12-21 | 2021-08-09 | Vehicle association method and device, roadside equipment and cloud control platform |
US17/444,891 US20210390334A1 (en) | 2020-12-21 | 2021-08-11 | Vehicle association method and device, roadside equipment and cloud control platform |
KR1020210127050A KR102595678B1 (en) | 2020-12-21 | 2021-09-27 | Vehicle association method, device, roadside equipment and cloud control platform |
JP2021179818A JP7280331B2 (en) | 2020-12-21 | 2021-11-02 | Vehicle association method, vehicle association device, electronic device, computer readable storage medium, roadside device, cloud control platform and program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011520233.6A CN112507957B (en) | 2020-12-21 | 2020-12-21 | Vehicle association method and device, road side equipment and cloud control platform |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112507957A true CN112507957A (en) | 2021-03-16 |
CN112507957B CN112507957B (en) | 2023-12-15 |
Family
ID=74923028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011520233.6A Active CN112507957B (en) | 2020-12-21 | 2020-12-21 | Vehicle association method and device, road side equipment and cloud control platform |
Country Status (5)
Country | Link |
---|---|
US (1) | US20210390334A1 (en) |
EP (1) | EP3940667A3 (en) |
JP (1) | JP7280331B2 (en) |
KR (1) | KR102595678B1 (en) |
CN (1) | CN112507957B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114202912A (en) * | 2021-11-15 | 2022-03-18 | 新奇点智能科技集团有限公司 | Traffic service providing method, device, server and storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014085316A1 (en) * | 2012-11-27 | 2014-06-05 | Cloudparc, Inc. | Controlling use of a single multi-vehicle parking space using multiple cameras |
CN103927764A (en) * | 2014-04-29 | 2014-07-16 | 重庆大学 | Vehicle tracking method combining target information and motion estimation |
US20170004386A1 (en) * | 2015-07-02 | 2017-01-05 | Agt International Gmbh | Multi-camera vehicle identification system |
WO2017157119A1 (en) * | 2016-03-18 | 2017-09-21 | 中兴通讯股份有限公司 | Method and device for identifying abnormal behavior of vehicle |
CN107545582A (en) * | 2017-07-04 | 2018-01-05 | 深圳大学 | Video multi-target tracking and device based on fuzzy logic |
TWI617998B (en) * | 2017-07-18 | 2018-03-11 | Chunghwa Telecom Co Ltd | System and method for car number identification data filtering |
CN109118766A (en) * | 2018-09-04 | 2019-01-01 | 华南师范大学 | A kind of colleague's vehicle discriminating method and device based on traffic block port |
CN110780289A (en) * | 2019-10-23 | 2020-02-11 | 北京信息科技大学 | Multi-target vehicle tracking method and device based on scene radar |
CN110874925A (en) * | 2018-08-31 | 2020-03-10 | 百度在线网络技术(北京)有限公司 | Intelligent road side unit and control method thereof |
CN111275983A (en) * | 2020-02-14 | 2020-06-12 | 北京百度网讯科技有限公司 | Vehicle tracking method, device, electronic equipment and computer-readable storage medium |
US20200265710A1 (en) * | 2019-02-20 | 2020-08-20 | Baidu Online Network Technology (Beijing) Co., Ltd. | Travelling track prediction method and device for vehicle |
CN111914664A (en) * | 2020-07-06 | 2020-11-10 | 同济大学 | Vehicle multi-target detection and track tracking method based on re-identification |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5996903B2 (en) * | 2012-03-28 | 2016-09-21 | セコム株式会社 | Moving object tracking device |
US10296828B2 (en) * | 2017-04-05 | 2019-05-21 | Here Global B.V. | Learning a similarity measure for vision-based localization on a high definition (HD) map |
US10599161B2 (en) * | 2017-08-08 | 2020-03-24 | Skydio, Inc. | Image space motion planning of an autonomous vehicle |
JP6831117B2 (en) * | 2018-11-29 | 2021-02-17 | 技研トラステム株式会社 | Moving object tracking method and image processing device used for this |
CN113840765A (en) * | 2019-05-29 | 2021-12-24 | 御眼视觉技术有限公司 | System and method for vehicle navigation |
WO2021011617A1 (en) * | 2019-07-15 | 2021-01-21 | Mobileye Vision Technologies Ltd. | Reducing stored parameters for a navigation system |
US11125575B2 (en) * | 2019-11-20 | 2021-09-21 | Here Global B.V. | Method and apparatus for estimating a location of a vehicle |
CN116783886A (en) * | 2020-06-25 | 2023-09-19 | 移动眼视觉科技有限公司 | Motion-based online calibration of multiple cameras |
WO2022087194A1 (en) * | 2020-10-21 | 2022-04-28 | IAA, Inc. | Automated vehicle condition grading |
2020
- 2020-12-21 CN CN202011520233.6A patent/CN112507957B/en active Active

2021
- 2021-08-09 EP EP21190441.2A patent/EP3940667A3/en not_active Ceased
- 2021-08-11 US US17/444,891 patent/US20210390334A1/en not_active Abandoned
- 2021-09-27 KR KR1020210127050A patent/KR102595678B1/en active IP Right Grant
- 2021-11-02 JP JP2021179818A patent/JP7280331B2/en active Active
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014085316A1 (en) * | 2012-11-27 | 2014-06-05 | Cloudparc, Inc. | Controlling use of a single multi-vehicle parking space using multiple cameras |
CN103927764A (en) * | 2014-04-29 | 2014-07-16 | 重庆大学 | Vehicle tracking method combining target information and motion estimation |
US20170004386A1 (en) * | 2015-07-02 | 2017-01-05 | Agt International Gmbh | Multi-camera vehicle identification system |
WO2017157119A1 (en) * | 2016-03-18 | 2017-09-21 | 中兴通讯股份有限公司 | Method and device for identifying abnormal behavior of vehicle |
CN107204114A (en) * | 2016-03-18 | 2017-09-26 | 中兴通讯股份有限公司 | Method and device for identifying abnormal vehicle behavior |
CN107545582A (en) * | 2017-07-04 | 2018-01-05 | 深圳大学 | Video multi-target tracking and device based on fuzzy logic |
TWI617998B (en) * | 2017-07-18 | 2018-03-11 | Chunghwa Telecom Co Ltd | System and method for license plate recognition data filtering |
CN110874925A (en) * | 2018-08-31 | 2020-03-10 | 百度在线网络技术(北京)有限公司 | Intelligent road side unit and control method thereof |
CN109118766A (en) * | 2018-09-04 | 2019-01-01 | 华南师范大学 | Companion vehicle identification method and device based on traffic checkpoints |
US20200265710A1 (en) * | 2019-02-20 | 2020-08-20 | Baidu Online Network Technology (Beijing) Co., Ltd. | Travelling track prediction method and device for vehicle |
CN110780289A (en) * | 2019-10-23 | 2020-02-11 | 北京信息科技大学 | Multi-target vehicle tracking method and device based on scene radar |
CN111275983A (en) * | 2020-02-14 | 2020-06-12 | 北京百度网讯科技有限公司 | Vehicle tracking method, device, electronic equipment and computer-readable storage medium |
CN111914664A (en) * | 2020-07-06 | 2020-11-10 | 同济大学 | Vehicle multi-target detection and track tracking method based on re-identification |
Non-Patent Citations (1)
Title |
---|
Ravi P. Ramachandran et al.: "A Pattern Recognition and Feature Fusion Formulation for Vehicle Reidentification in Intelligent Transportation Systems", IEEE Xplore, pages 3840-3843 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114202912A (en) * | 2021-11-15 | 2022-03-18 | 新奇点智能科技集团有限公司 | Traffic service providing method, device, server and storage medium |
CN114202912B (en) * | 2021-11-15 | 2023-08-18 | 新奇点智能科技集团有限公司 | Traffic service providing method, device, server and storage medium |
Also Published As
Publication number | Publication date |
---|---|
EP3940667A2 (en) | 2022-01-19 |
US20210390334A1 (en) | 2021-12-16 |
EP3940667A3 (en) | 2022-03-16 |
KR102595678B1 (en) | 2023-10-27 |
JP2022098433A (en) | 2022-07-01 |
KR20210125447A (en) | 2021-10-18 |
JP7280331B2 (en) | 2023-05-23 |
CN112507957B (en) | 2023-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113205037B (en) | Event detection method, event detection device, electronic equipment and readable storage medium | |
JP7273129B2 (en) | Lane detection method, device, electronic device, storage medium and vehicle | |
EP4145844A1 (en) | Method and apparatus for detecting jitter in video, electronic device, and storage medium | |
CN112785625A (en) | Target tracking method and device, electronic equipment and storage medium | |
CN113392794B (en) | Vehicle line crossing identification method and device, electronic equipment and storage medium | |
CN112597895A (en) | Confidence determination method based on offset detection, road side equipment and cloud control platform | |
CN112528927A (en) | Confidence determination method based on trajectory analysis, roadside equipment and cloud control platform | |
CN114119990A (en) | Method, apparatus and computer program product for image feature point matching | |
CN112507957B (en) | Vehicle association method and device, road side equipment and cloud control platform | |
CN114037087A (en) | Model training method and device, depth prediction method and device, equipment and medium | |
CN113920158A (en) | Training and traffic object tracking method and device of tracking model | |
CN114429631B (en) | Three-dimensional object detection method, device, equipment and storage medium | |
CN115131315A (en) | Image change detection method, device, equipment and storage medium | |
CN114549584A (en) | Information processing method and device, electronic equipment and storage medium | |
CN112560726A (en) | Target detection confidence determining method, road side equipment and cloud control platform | |
CN114694138B (en) | Road surface detection method, device and equipment applied to intelligent driving | |
CN114490909B (en) | Object association method and device and electronic equipment | |
CN114049615B (en) | Traffic object fusion association method and device in driving environment and edge computing equipment | |
CN115906001A (en) | Multi-sensor fusion target detection method, device and equipment and automatic driving vehicle | |
CN112541465A (en) | Traffic flow statistical method and device, road side equipment and cloud control platform | |
CN114693777A (en) | Method and device for determining spatial position of traffic sign and electronic equipment | |
CN113360688A (en) | Information base construction method, device and system | |
CN117710459A (en) | Method, device and computer program product for determining three-dimensional information | |
CN114353853A (en) | Method, apparatus and computer program product for determining detection accuracy | |
JP2023535661A (en) | Vehicle lane crossing recognition method, device, electronic device, storage medium and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2021-10-26
Address after: 100176 101, Floor 1, Building 1, Yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing
Applicant after: Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd.
Address before: 2/F, Baidu Building, 10 Shangdi 10th Street, Haidian District, Beijing 100085
Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co., Ltd.
GR01 | Patent grant | ||