CN109426791B - Multi-site and multi-vehicle matching method, server and system


Info

Publication number
CN109426791B
CN109426791B (application CN201710779960.6A)
Authority
CN
China
Prior art keywords
vehicle information
vehicle
information set
station
feature
Prior art date
Legal status
Active
Application number
CN201710779960.6A
Other languages
Chinese (zh)
Other versions
CN109426791A (en)
Inventor
杨耿
何小川
Current Assignee
Shenzhen Genvict Technology Co Ltd
Original Assignee
Shenzhen Genvict Technology Co Ltd
Application filed by Shenzhen Genvict Technology Co Ltd filed Critical Shenzhen Genvict Technology Co Ltd
Priority to CN201710779960.6A
Publication of CN109426791A
Application granted
Publication of CN109426791B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles


Abstract

The invention relates to a multi-station multi-vehicle matching method comprising the following steps: acquiring road image data respectively collected by a first station and a second station; preprocessing the road image data collected by the first station to obtain a first vehicle information set, and preprocessing the road image data collected by the second station to obtain a second vehicle information set, wherein each set comprises a plurality of pieces of vehicle information and each piece of vehicle information comprises at least one vehicle feature; performing feature fusion on the first and second vehicle information sets to generate a fused feature data set; and matching the vehicle information of the first set with that of the second set according to the fused feature data set to obtain a matching result. By adopting a multivariate information fusion scheme, the method enables accurate multi-vehicle matching between stations.

Description

Multi-site and multi-vehicle matching method, server and system
Technical Field
The invention relates to the field of Intelligent Transportation Systems (ITS), and in particular to a multi-site multi-vehicle matching method, server, and system.
Background
The traditional way of acquiring traffic flow information is to collect coarse cross-sectional traffic data at fixed points, including vehicle count, speed, occupancy, license plate number, vehicle type, and flow rate. With this information, conventional traffic flow models can estimate the behavior of road segments or road networks, but the estimates, for example segment speed and travel time, are rough rather than precise.
Accurate road-segment information can be obtained through vehicle matching between stations. Although license plate recognition and RFID technology can uniquely match vehicles between stations, RFID tag adoption is limited, and license plate recognition suffers from dirty plates, fake plates, and similar problems. In particular, same-type vehicles on highways using cloned license plates, or drivers swapping toll passage cards to evade fees, cannot be handled by RFID or license plate recognition. Current vehicle image retrieval technology can find similar vehicles, but it only returns a series of images ranked by similarity and cannot uniquely match a vehicle.
Disclosure of Invention
In order to solve the above problems, the present invention provides a multi-station multi-vehicle matching method, including:
acquiring road image data respectively acquired by a first station and a second station, wherein the second station is positioned in front of the first station in the driving direction;
preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and preprocessing the road image data acquired by the second station to obtain a second vehicle information set, wherein the first vehicle information set and the second vehicle information set respectively comprise a plurality of pieces of vehicle information, and the vehicle information comprises at least one vehicle characteristic;
performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fusion feature data set;
and matching the vehicle information corresponding to the first vehicle information set and the vehicle information corresponding to the second vehicle information set according to the fusion feature data set, and obtaining a matching result.
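These four steps can be sketched as a pipeline (a minimal illustration; the function names, the injected callables, and the scalar "features" are all hypothetical stand-ins for the concrete preprocessing, fusion, and matching algorithms described later):

```python
def match_stations(first_images, second_images, preprocess, fuse, match):
    """Skeleton of the four-step method; preprocess/fuse/match are
    injected callables standing in for the concrete algorithms."""
    first_set = [preprocess(img) for img in first_images]      # step 2
    second_set = [preprocess(img) for img in second_images]    # step 2
    fused = {(i, j): fuse(a, b)                                # step 3
             for i, a in enumerate(first_set)
             for j, b in enumerate(second_set)}
    return match(fused, len(first_set), len(second_set))       # step 4

# Toy run: "preprocessing" passes a single scalar feature through,
# fusion is absolute distance, matching greedily picks the nearest candidate.
result = match_stations(
    [3.0, 7.0], [7.1, 2.9],
    preprocess=lambda x: x,
    fuse=lambda a, b: abs(a - b),
    match=lambda fused, n, m: {i: min(range(m), key=lambda j: fused[(i, j)])
                               for i in range(n)},
)
print(result)  # {0: 1, 1: 0}
```

The greedy matcher here is only a placeholder; the patent's actual matching step considers all pairs jointly via a shortest-path search.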
Further, the vehicle features include vehicle color, vehicle shape, vehicle length, or vehicle corner points.
Further, the step of preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and the step of preprocessing the road image data acquired by the second station to obtain a second vehicle information set specifically include:
selecting road image data collected by the first station in a first time period and preprocessing it to obtain the first vehicle information set;
and selecting a second time period according to the first time period, and preprocessing road image data collected by the second station in the second time period to obtain the second vehicle information set.
Further, the step of performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fused feature data set specifically includes:
calculating a feature group between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the feature group comprises the feature distances of at least two vehicle features between the two pieces of vehicle information, or the corresponding probabilities of the at least two vehicle features;
and calculating fused feature data according to the feature groups, wherein the fused feature data corresponding to the feature groups form the fused feature data set.
Further, when the feature group includes a feature distance of at least two vehicle features between two pieces of vehicle information, the step of calculating fused feature data according to the feature group specifically includes:
calculating fused feature data by adopting a feature distance direct fusion algorithm according to the feature group; or,
when the feature group includes corresponding probabilities of at least two vehicle features between two pieces of vehicle information, the step of calculating fused feature data according to the feature group specifically includes:
and calculating fused feature data by adopting a probability distance fusion algorithm according to the feature group.
Further, the calculating of fusion feature data by using a feature distance direct fusion algorithm according to the feature group specifically includes:
selecting ith vehicle information in a first vehicle information set, and selecting jth vehicle information from a second vehicle information set;
calculating the feature distance of each vehicle feature between the ith vehicle information and the jth vehicle information: d_1(i,j), d_2(i,j), ..., d_k(i,j), ..., d_K(i,j);
then the fused feature data D = w_1*d_1(i,j) + w_2*d_2(i,j) + ... + w_k*d_k(i,j) + ... + w_K*d_K(i,j), wherein the weights satisfy w_1 + w_2 + ... + w_k + ... + w_K = 1.
Further, the calculating of fusion feature data by using a probability distance fusion algorithm according to the feature group specifically includes:
selecting ith vehicle information in a first vehicle information set, and selecting jth vehicle information from a second vehicle information set, wherein the first vehicle information set comprises N pieces of vehicle information, and the second vehicle information set comprises M pieces of vehicle information;
calculating the feature distance of each vehicle feature between the ith vehicle information and the jth vehicle information: d_1(i,j), d_2(i,j), ..., d_k(i,j), ..., d_K(i,j); and, when the first vehicle information set and/or the second vehicle information set comprise a plurality of vehicles, obtaining the probabilities p_1(d_1(i,j)), p_1(d_2(i,j)), ..., p_1(d_k(i,j)), ..., p_1(d_K(i,j)) that the corresponding vehicle feature judges the ith vehicle information and the jth vehicle information to be the same vehicle, and the probabilities p_2(d_1(i,j)), p_2(d_2(i,j)), ..., p_2(d_k(i,j)), ..., p_2(d_K(i,j)) that the corresponding vehicle feature judges them not to be the same vehicle;
The fused feature data lnP(D) can then be obtained (the equation images of the source are omitted; per the definitions below, the weighted multiplications take the form λ(i,j) = Π_k [ρ(d_k(i,j))]^(w_k) and λ(i,τ) = Π_k [ξ(d_k(i,j))]^(w_k)), wherein:
λ(i,j) is the probability that the ith vehicle information and the jth vehicle information represent the same vehicle after feature fusion, obtained by weighted multiplication of the ρ(d_k(i,j)) over the at least two feature distances between the ith and jth vehicle information; λ(i,τ) is the probability that the ith vehicle information has no matching vehicle, obtained by weighted multiplication of the ξ(d_k(i,j)), where ξ(d_k(i,j)) is the probability that the ith vehicle information has no vehicle match using the corresponding vehicle feature; and P is the overall probability of matching the plurality of vehicle features within a feature group.
Further, the step of matching the vehicle information corresponding to the first vehicle information set with the vehicle information corresponding to the second vehicle information set according to the fused feature data set and obtaining a matching result specifically includes:
determining a relative weight between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the relative weight is the corresponding fused feature data;
and matching by using a shortest path algorithm according to the pairwise relative weights to obtain a matching result.
In still another aspect, the present invention further discloses a multi-site multi-vehicle matching server, including:
the acquisition module is used for acquiring road image data respectively acquired by a first station and a second station, wherein the second station is positioned in front of the first station in the driving direction;
the processing module is used for preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and preprocessing the road image data acquired by the second station to obtain a second vehicle information set, wherein the first vehicle information set and the second vehicle information set respectively comprise a plurality of pieces of vehicle information, and the vehicle information comprises at least one vehicle characteristic;
the fusion module is used for performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fusion feature data set;
and the matching module is used for matching the vehicle information corresponding to the first vehicle information set with the vehicle information corresponding to the second vehicle information set according to the fusion feature data set and obtaining a matching result.
In still another aspect, the invention further discloses a multi-station multi-vehicle matching system, comprising at least two stations for acquiring road image data, the stations being connected with the above multi-station multi-vehicle matching server.
According to the technical scheme, a multivariate information fusion method is adopted to accurately match multiple vehicles between stations, and vehicles are tracked on the basis of this accurate matching, thereby addressing fee evasion, fake license plates, and unlicensed vehicles.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a flow chart of a multi-site multi-vehicle matching method of the present invention;
FIG. 2 is a flowchart of the method for matching based on the shortest path algorithm according to the present invention.
Detailed Description
In order to solve the above problems, the present invention provides a multi-station multi-vehicle matching method, which is applied to a system including at least two stations.
In a first embodiment, as shown in fig. 1, a multi-station multi-vehicle matching method includes:
S101: Acquiring road image data respectively collected by a first station and a second station, wherein the second station is located ahead of the first station in the driving direction. It can be understood that a vehicle traveling normally on the road passes the first station first; at that moment the road image data acquired by the first station contains a plurality of vehicles, including the target vehicle. When the vehicle then reaches the second station, the road image data acquired by the second station likewise contains a plurality of vehicles, including the target vehicle. To accurately monitor road conditions, the correspondence of the target vehicle between the road image data collected at the first station and that collected at the second station must be determined.
S102: preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and preprocessing the road image data acquired by the second station to obtain a second vehicle information set, wherein the first vehicle information set and the second vehicle information set respectively comprise a plurality of pieces of vehicle information, and the vehicle information comprises at least one vehicle characteristic; wherein the vehicle information preferably includes at least two vehicle characteristics for more accurate matching of the vehicles. The vehicle features refer to non-unique features such as vehicle colors, vehicle shapes, vehicle lengths, vehicle corner points and the like, and unique features such as license plate recognition and the like are not listed, so that the vehicle is easy to understand.
The road image data collected by a station is preprocessed to facilitate subsequent use, and the preprocessing can be continuously optimized through artificial-intelligence-based deep learning; with deep learning, the system can even match a vehicle accurately through a single non-unique feature. Preprocessing extracts the vehicle features of each vehicle in the road image data collected by the station, and these features may include global features and salient features of the vehicle. For the same vehicle, the vehicle features collected by one station and the vehicle's arrival time can be stored together. In the present embodiment, the vehicle features include vehicle color, vehicle shape, vehicle length, and vehicle corner points: the color is represented by a color histogram, the shape is obtained by template matching, and the corner points are extracted by the SURF (Speeded-Up Robust Features) algorithm.
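The patent does not fix an implementation for these features, but the color-histogram representation mentioned above can be sketched as follows (a minimal numpy illustration; the bin count and the normalization are assumptions, not specified by the source):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Normalized per-channel color histogram for an HxWx3 image.

    A simple stand-in for the color feature described above; the patent
    does not specify bin counts or normalization, so these are assumed.
    """
    channels = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
                for c in range(image.shape[-1])]
    hist = np.concatenate(channels).astype(float)
    return hist / hist.sum()  # normalize so the histogram sums to 1

# Toy 4x4 all-black "vehicle crop": every pixel falls in the first bin
# of each channel.
img = np.zeros((4, 4, 3), dtype=np.uint8)
feat = color_histogram(img)
print(feat.shape)  # (24,) = 3 channels x 8 bins
```

A feature vector of this kind can then be compared between stations with any of the feature distances discussed below.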
Since the vehicle arrival time is known and the travel-time distribution of vehicles can be determined from historical data, time windows can be selected for higher matching efficiency: road image data collected by the first station in a first time period is selected and preprocessed to obtain the first vehicle information set; a second time period is then chosen according to the first time period, and road image data collected by the second station in the second time period is preprocessed to obtain the second vehicle information set. For example, if the average travel time from the first station to the second station is 30 minutes and the N vehicles passing the first station during 12:00-12:01 need to be matched, a wider range can be chosen to ensure that all N vehicles can be matched, e.g. the M vehicles passing the second station during 12:20-12:31. The second time period can of course be chosen according to actual conditions. Selecting time periods in this way effectively improves matching efficiency.
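The time-window selection above can be sketched as follows (a hypothetical helper; the 30-minute mean travel time matches the example in the text, while the margin values are illustrative assumptions):

```python
from datetime import datetime, timedelta

def second_station_window(first_start, first_end, mean_travel,
                          margin_before, margin_after):
    """Shift the first station's window by the historical mean travel
    time and widen it so every candidate vehicle falls inside."""
    return (first_start + mean_travel - margin_before,
            first_end + mean_travel + margin_after)

# Vehicles passing the first station during 12:00-12:01, mean travel
# time 30 min; margins are chosen here only to reproduce the example.
start, end = datetime(2024, 1, 1, 12, 0), datetime(2024, 1, 1, 12, 1)
lo, hi = second_station_window(start, end, timedelta(minutes=30),
                               timedelta(minutes=10), timedelta(minutes=0))
print(lo.strftime("%H:%M"), "-", hi.strftime("%H:%M"))  # 12:20 - 12:31
```

In practice the margins would come from the spread of the historical travel-time distribution rather than fixed constants.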
S103: performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fusion feature data set;
S104: Matching the vehicle information corresponding to the first vehicle information set with the vehicle information corresponding to the second vehicle information set according to the fused feature data set, and obtaining a matching result.
The N vehicles in the first vehicle information set are matched against the M vehicles in the second vehicle information set. Because multiple vehicle features of all vehicles in both sets are considered jointly during matching, a globally optimal matching scheme can be obtained, achieving accurate matching; alternatively, matching can be performed through deep learning on a single non-unique feature. In practical applications, this matching can be combined with RFID or license plate recognition technology to meet stricter identification requirements.
In a second embodiment, on the basis of the first embodiment, the performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fused feature data set specifically includes:
calculating a feature group between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the feature group comprises the feature distances of at least two vehicle features between the two pieces of vehicle information, or the corresponding probabilities of the at least two vehicle features;
and calculating fused feature data according to the feature groups, wherein the fused feature data corresponding to the feature groups form the fused feature data set.
Specifically, the feature distance direct fusion algorithm or the probability distance fusion algorithm is used to calculate fused feature data from a feature group. This step is repeated for the different feature groups, i.e. fused feature data is calculated for the feature group corresponding to each vehicle in the first vehicle information set paired with each vehicle in the second vehicle information set, and the resulting pieces of fused feature data form the fused feature data set. In a preferred embodiment, the fused feature data set contains the fused feature data for the feature group between any vehicle in the first vehicle information set and any vehicle in the second vehicle information set.
When the feature group comprises feature distances of at least two vehicle features between two pieces of vehicle information, calculating fusion feature data by using a feature distance direct fusion algorithm according to the feature group specifically comprises the following steps:
selecting ith vehicle information in a first vehicle information set, and selecting jth vehicle information from a second vehicle information set;
calculating the feature distance of each vehicle feature between the ith vehicle information and the jth vehicle information: d_1(i,j), d_2(i,j), ..., d_k(i,j), ..., d_K(i,j). It can be understood that the vehicle information includes K kinds of vehicle features, and the feature distances of different kinds of vehicle features are not computed in exactly the same way: algorithms such as normalized cross-correlation (NCC), sum of absolute differences (SAD), sum of squared differences (SSD), Euclidean distance, cosine distance, or Mahalanobis distance are selected according to the vehicle feature, without limitation here.
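A few of the distance measures named above can be sketched like this (a minimal numpy illustration; which measure suits which vehicle feature is left open, as in the text):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences."""
    return float(np.abs(a - b).sum())

def ssd(a, b):
    """Sum of squared differences."""
    return float(((a - b) ** 2).sum())

def euclidean(a, b):
    """Euclidean distance between feature vectors."""
    return float(np.linalg.norm(a - b))

def cosine_distance(a, b):
    """1 - cosine similarity; 0 for parallel vectors."""
    return float(1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(sad(u, v), ssd(u, v), euclidean(u, v), cosine_distance(u, v))
```

Each d_k(i,j) in the formulas that follow is simply one such measure applied to the kth feature of the two vehicles.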
Then the fused feature data D = w_1*d_1(i,j) + w_2*d_2(i,j) + ... + w_k*d_k(i,j) + ... + w_K*d_K(i,j), wherein the weights satisfy w_1 + w_2 + ... + w_k + ... + w_K = 1. The most common way to compute the weights w_k is least squares: a set of historical data, e.g. 300 matched vehicles, is selected, and the w_k are found that minimize D for same-vehicle pairs and maximize D for different-vehicle pairs. Besides least squares, the historical success rate of each individual d_k(i,j) can be used: w_k = c_k / sum(c_1:c_K), where c_k is the success rate of d_k; that is, the weight of a feature distance equals the contribution of its success rate to the sum of the success rates of all features.
The feature distance direct fusion algorithm is applied repeatedly until corresponding fused feature data has been generated between each vehicle in the first vehicle information set and each vehicle in the second vehicle information set.
In other embodiments, when the feature group includes the corresponding probabilities of at least two vehicle features between two pieces of vehicle information, the fused feature data may instead be calculated by a probability distance fusion algorithm, where the corresponding probability of a vehicle feature is the probability that this feature matches the two pieces of vehicle information to the same vehicle. Specifically:
selecting ith vehicle information in a first vehicle information set, and selecting jth vehicle information from a second vehicle information set;
calculating the feature distance of each vehicle feature between the ith vehicle information and the jth vehicle information: d_1(i,j), d_2(i,j), ..., d_k(i,j), ..., d_K(i,j); and, when the first vehicle information set and/or the second vehicle information set comprise a plurality of vehicles, obtaining the probabilities p_1(d_1(i,j)), p_1(d_2(i,j)), ..., p_1(d_k(i,j)), ..., p_1(d_K(i,j)) that the corresponding vehicle feature judges the ith vehicle information and the jth vehicle information to be the same vehicle, and the probabilities p_2(d_1(i,j)), p_2(d_2(i,j)), ..., p_2(d_k(i,j)), ..., p_2(d_K(i,j)) that the corresponding vehicle feature judges them not to be the same vehicle;
The fused feature data lnP(D) can then be obtained (the equation images of the source are omitted; per the definitions below, the weighted multiplications take the form λ(i,j) = Π_k [ρ(d_k(i,j))]^(w_k) and λ(i,τ) = Π_k [ξ(d_k(i,j))]^(w_k)), wherein:
λ(i,j) is the probability that the ith vehicle information and the jth vehicle information represent the same vehicle after feature fusion, obtained by weighted multiplication of the ρ(d_k(i,j)) over the at least two feature distances between the ith and jth vehicle information; λ(i,τ) is the probability that the ith vehicle information has no matching vehicle, obtained by weighted multiplication of the ξ(d_k(i,j)), where ξ(d_k(i,j)) is the probability that the ith vehicle information has no vehicle match using the corresponding vehicle feature.
As with the feature distance direct fusion algorithm, the probability distance fusion algorithm is applied repeatedly until corresponding fused feature data has been generated between each vehicle in the first vehicle information set and each vehicle in the second vehicle information set.
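The weighted multiplication underlying λ(i,j) and λ(i,τ) can be sketched as follows (the geometric, weight-exponent form is an assumption read from the description, since the patent's equation images are not recoverable; the ρ values are hypothetical):

```python
import math

def weighted_product(values, weights):
    """Weighted multiplicative fusion: prod_k values[k] ** weights[k].

    Used here for lambda(i,j) from rho(d_k(i,j)), and equally for
    lambda(i,tau) from xi(d_k(i,j)); the exact functional form in the
    patent's equation images is not recoverable, so this is assumed.
    """
    return math.prod(v ** w for v, w in zip(values, weights))

rho = [0.8, 0.9, 0.7]  # hypothetical per-feature same-vehicle values
w = [0.5, 0.3, 0.2]    # feature weights summing to 1
lam = weighted_product(rho, w)
print(round(lam, 4))
```

A weight-exponent product of this kind keeps the fused value in (0, 1) whenever the inputs are, which is consistent with its use as a probability.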
After the fused feature data between each vehicle in the first vehicle information set and each vehicle in the second vehicle information set is obtained, matching the vehicle information of the two sets according to the fused feature data set and obtaining a matching result specifically includes:
determining a relative weight between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the relative weight is the corresponding fused feature data;
and matching by using a shortest path algorithm according to the pairwise relative weights to obtain a matching result.
Specifically, as shown in fig. 2, i-1 and i-2 on the left are the vehicles in the first vehicle information set, and j-1, j-2, j-3, and j-4 on the right are the vehicles in the second vehicle information set. The fused feature data calculated in the previous step is assigned as the relative weight of each pair, so the matching problem becomes a minimum/maximum-value matching problem in graph theory, which is solved with a shortest path algorithm such as Dijkstra's algorithm.
In particular, assume the pairwise relative weights S_11 = 0.2, S_12 = 0.3, S_13 = 0.3, S_14 = 0.2, S_21 = 0.1, S_22 = 0.5, S_23 = 0.4, S_24 = 0.3, where S is the fused feature data lnP(D) of the probability distance fusion algorithm or the fused feature data D of the feature distance direct fusion algorithm, and P is the overall probability of matching the plurality of vehicle features in a feature group. The calculation proceeds as follows:
first, from the smallest S 11 Start search, P min1 =S 11 . i-1 to j-1, and no longer matched; next smallest S ij Then P is min1 =S 11 +S 24 0.5. The first result is obtained.
Second, from the smallest S 12 Start of search, P min2 =S 12 . i-1 to j-2, and no longer matched; next smallest S ij Then P is min2 =S 12 +S 21 0.4. The second result is obtained.
Again, from the smallest S 13 Start search, P min3 =S 13 . i-1 to j-3, no longer matched; next smallest S ij Then P is min1 =P min3 =S 13 +S 21 0.4. The third result was obtained.
Finally, from the smallest S 14 Start search, P min4 =S 14 . i-1 to j-4, and no longer matched; next smallest S ij Then P is min4 =S 14 +S 21 0.3. A fourth result is obtained.
After all possibilities are searched, the fourth result is found to be the smallest: i-1 matches j-4 and i-2 matches j-1, so vehicles i-1 and i-2 are each uniquely matched.
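On an example this small, the search can be reproduced with an exhaustive minimum-sum matching (a brute-force stand-in for the Dijkstra-style shortest-path search; the weights are the S_ij values from the text, indexed from 0):

```python
from itertools import permutations

# S[i][j]: relative weight between left vehicle i and right vehicle j
S = [[0.2, 0.3, 0.3, 0.2],   # i-1
     [0.1, 0.5, 0.4, 0.3]]   # i-2

def best_matching(weights):
    """Try every assignment of left vehicles to distinct right vehicles
    and keep the one with the smallest total weight."""
    n_left, n_right = len(weights), len(weights[0])
    best_total, best_assign = float("inf"), None
    for assign in permutations(range(n_right), n_left):
        total = sum(weights[i][j] for i, j in enumerate(assign))
        if total < best_total:
            best_total, best_assign = total, assign
    return best_total, best_assign

total, assign = best_matching(S)
print(assign)  # (3, 0): i-1 matches j-4, i-2 matches j-1
```

Brute force is exponential in the number of vehicles; at scale, a shortest-path or assignment algorithm (as the patent proposes) is required instead.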
In still another aspect, the present invention further discloses a multi-site multi-vehicle matching server, including:
the acquisition module is used for acquiring road image data respectively acquired by a first station and a second station, wherein the second station is positioned in front of the first station in the driving direction;
the processing module is used for preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and preprocessing the road image data acquired by the second station to obtain a second vehicle information set, wherein the first vehicle information set and the second vehicle information set respectively comprise a plurality of pieces of vehicle information, and the vehicle information comprises at least one vehicle characteristic;
the fusion module is used for carrying out feature fusion on the first vehicle information set and the second vehicle information set to generate a fusion feature data set;
and the matching module is used for matching the vehicle information corresponding to the first vehicle information set with the vehicle information corresponding to the second vehicle information set according to the fusion feature data set and obtaining a matching result.
In still another aspect, the invention further discloses a multi-station multi-vehicle matching system, which comprises at least two stations for acquiring road image data, the stations being connected to the multi-station multi-vehicle matching server described above.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art may make various modifications and changes. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the scope of the claims of the present invention.

Claims (8)

1. A multi-station multi-vehicle matching method is characterized by comprising the following steps:
acquiring road image data respectively acquired by a first station and a second station, wherein the second station is positioned in front of the first station in the driving direction;
preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and preprocessing the road image data acquired by the second station to obtain a second vehicle information set, wherein the first vehicle information set and the second vehicle information set respectively comprise a plurality of pieces of vehicle information, and the vehicle information comprises at least one vehicle characteristic;
performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fusion feature data set;
determining a relative weight between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the relative weight is the corresponding fused feature data;
and performing matching by using a shortest path algorithm according to the pairwise relative weights to obtain a matching result.
2. The matching method according to claim 1, characterized in that the vehicle feature is a vehicle color, a vehicle shape, a vehicle length, or a vehicle corner point.
3. The matching method according to claim 1, wherein the step of preprocessing the road image data collected by the first station to obtain a first vehicle information set, and the step of preprocessing the road image data collected by the second station to obtain a second vehicle information set specifically comprises:
selecting road image data collected by the first station in a first time period to carry out preprocessing to obtain a first vehicle information set;
and selecting a second time period according to the first time period, and acquiring road image data acquired by the second station in the second time period to carry out preprocessing to obtain a second vehicle information set.
4. The matching method according to claim 1, wherein the step of performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fused feature data set specifically comprises:
calculating a feature group between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the feature group comprises feature distances of at least two vehicle features between the two pieces of vehicle information, or corresponding probabilities of the at least two vehicle features respectively;
and calculating fused feature data according to the feature groups, wherein the fused feature data corresponding to the feature groups form the fused feature data set.
5. The matching method according to claim 4, wherein, when the feature group includes feature distances of at least two vehicle features between two pieces of vehicle information, the step of calculating fused feature data from the feature group specifically includes:
calculating fused feature data by adopting a feature distance direct fusion algorithm according to the feature group; or,
when the feature group includes corresponding probabilities of at least two vehicle features between two pieces of vehicle information, the step of calculating fused feature data according to the feature group specifically includes:
and calculating fused feature data by adopting a probability distance fusion algorithm according to the feature group.
6. The matching method according to claim 5, wherein the calculating of the fused feature data by using the feature distance direct fusion algorithm according to the feature group specifically comprises:
selecting ith vehicle information in a first vehicle information set, and selecting jth vehicle information from a second vehicle information set;
calculating the feature distance of each vehicle feature between the ith vehicle information and the jth vehicle information: d1(i,j), d2(i,j), ……, dk(i,j), ……, dK(i,j);
the fused feature data D = w1*d1(i,j) + w2*d2(i,j) + …… + wk*dk(i,j) + …… + wK*dK(i,j), wherein the weights satisfy w1 + w2 + …… + wk + …… + wK = 1.
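The direct fusion of claim 6 is a plain weighted sum of per-feature distances. A minimal sketch, with hypothetical distance and weight values (the claim only requires that the weights sum to 1):

```python
def fuse_distances(distances, weights):
    """Feature-distance direct fusion: D = sum_k w_k * d_k(i, j),
    with the weights constrained to sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * d for w, d in zip(weights, distances))

# Hypothetical feature distances between vehicle i and vehicle j,
# e.g. color distance, shape distance, length distance:
d = [0.2, 0.6, 0.1]
w = [0.5, 0.3, 0.2]   # per-feature weights, summing to 1
D = fuse_distances(d, w)
print(D)  # 0.5*0.2 + 0.3*0.6 + 0.2*0.1
```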
7. A multi-site multi-vehicle matching server, comprising:
the acquisition module is used for acquiring road image data respectively acquired by a first station and a second station, wherein the second station is positioned in front of the first station in the driving direction;
the processing module is used for preprocessing the road image data acquired by the first station to obtain a first vehicle information set, and preprocessing the road image data acquired by the second station to obtain a second vehicle information set, wherein the first vehicle information set and the second vehicle information set respectively comprise a plurality of pieces of vehicle information, and the vehicle information comprises at least one vehicle characteristic;
the fusion module is used for performing feature fusion on the first vehicle information set and the second vehicle information set to generate a fusion feature data set;
the matching module is used for determining a relative weight between each piece of vehicle information in the first vehicle information set and each piece of vehicle information in the second vehicle information set, wherein the relative weight is the corresponding fused feature data;
and performing matching by using a shortest path algorithm according to the pairwise relative weights to obtain a matching result.
8. A multi-site multi-vehicle matching system comprising at least 2 sites for acquiring road image data, said sites being connected to the multi-site multi-vehicle matching server of claim 7.
CN201710779960.6A 2017-09-01 2017-09-01 Multi-site and multi-vehicle matching method, server and system Active CN109426791B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710779960.6A CN109426791B (en) 2017-09-01 2017-09-01 Multi-site and multi-vehicle matching method, server and system

Publications (2)

Publication Number Publication Date
CN109426791A CN109426791A (en) 2019-03-05
CN109426791B true CN109426791B (en) 2022-09-16

Family

ID=65513019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710779960.6A Active CN109426791B (en) 2017-09-01 2017-09-01 Multi-site and multi-vehicle matching method, server and system

Country Status (1)

Country Link
CN (1) CN109426791B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002342872A (en) * 2001-05-11 2002-11-29 Sumitomo Electric Ind Ltd Device and method for detecting abnormality of traffic flow
CN102081846A (en) * 2011-02-22 2011-06-01 交通运输部公路科学研究所 Expressway charge data track matching based traffic state recognition method
CN102289948A (en) * 2011-09-02 2011-12-21 浙江大学 Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN103020989A (en) * 2012-12-05 2013-04-03 河海大学 Multi-view target tracking method based on on-line scene feature clustering
CN103729892A (en) * 2013-06-20 2014-04-16 深圳市金溢科技有限公司 Vehicle positioning method and device and processor
CN104794731A (en) * 2015-05-12 2015-07-22 成都新舟锐视科技有限公司 Multi-target detection and tracking method for speed dome camera control strategy
CN105894542A (en) * 2016-04-26 2016-08-24 深圳大学 Online target tracking method and apparatus

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001344689A (en) * 2000-06-02 2001-12-14 Hiroshi Imai Vehicle correspondence device and method
CN1186750C (en) * 2002-06-03 2005-01-26 昆明利普机器视觉工程有限公司 A traffic flow detection system based on visual vehicle optical characteristic recognition and matching
US7519197B2 (en) * 2005-03-30 2009-04-14 Sarnoff Corporation Object identification between non-overlapping cameras without direct feature matching
CN102201165A (en) * 2010-03-25 2011-09-28 北京汉王智通科技有限公司 Monitoring system of vehicle traffic violation at crossing and method thereof
CN101916383B (en) * 2010-08-25 2013-03-20 浙江师范大学 Vehicle detecting, tracking and identifying system based on multi-camera
CN102200999B (en) * 2011-04-27 2012-10-10 华中科技大学 Method for retrieving similarity shape
CN102354389B (en) * 2011-09-23 2013-07-31 河海大学 Visual-saliency-based image non-watermark algorithm and image copyright authentication method
US9420427B2 (en) * 2013-12-05 2016-08-16 Deutsche Telekom Ag Method and system for tracking the whereabouts of people in urban settings
CN105096590B (en) * 2014-04-23 2019-07-26 株式会社日立制作所 Traffic information creating method and traffic information generating device
US9685079B2 (en) * 2014-05-15 2017-06-20 Conduent Business Services, Llc Short-time stopping detection from red light camera evidentiary photos
CN104298990B (en) * 2014-09-15 2017-12-22 西安电子科技大学 A kind of Fast Graphics matching based on skeleton drawing is with knowing method for distinguishing
CN104732485B (en) * 2015-04-21 2017-10-27 深圳市深图医学影像设备有限公司 The joining method and system of a kind of digital X-ray image
CN106960182B (en) * 2017-03-02 2018-12-14 云南大学 A kind of pedestrian's recognition methods again integrated based on multiple features


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A Deep Learning-Based Approach to Progressive Vehicle Re-identification for Urban Surveillance; Xinchen Liu et al.; ECCV 2016; 20160917; 869-884 *
Multiple object tracking using A* association algorithm with dynamic weights; Zhenghao Xi et al.; Journal of Intelligent & Fuzzy Systems; 20151231; Vol. 29, No. 5; 2059-2072 *
A vehicle object identity matching method based on multi-dimensional feature fusion; Liu Jiayun; Computer Technology and Development; 20160322; Vol. 26, No. 4; 167-176 *
Research on a multi-vehicle tracking method based on motion detection; Shan Yugang et al.; Computer Measurement & Control; 20170325; Vol. 25, No. 3; 24-28 *
A survey of cross-camera object tracking; Wang Xianbin et al.; Graphics and Image; 20170325; 85-90 *

Also Published As

Publication number Publication date
CN109426791A (en) 2019-03-05

Similar Documents

Publication Publication Date Title
Wang et al. Lanenet: Real-time lane detection networks for autonomous driving
Liu et al. A survey on deep-learning approaches for vehicle trajectory prediction in autonomous driving
Son et al. Robust multi-lane detection and tracking using adaptive threshold and lane classification
JP5838901B2 (en) Object identification device and object identification method
CN106767873A (en) A kind of map-matching method based on space-time
WO2016034209A1 (en) Method and system for providing a dynamic ride sharing service
Xu et al. Secure and reliable transfer learning framework for 6G-enabled Internet of Vehicles
CN112347983B (en) Lane line detection processing method, lane line detection processing device, computer equipment and storage medium
CN110443849B (en) Target positioning method for double-current convolution neural network regression learning based on depth image
CN106441316A (en) Single-point road network matching method based on historical data
Guindel et al. Joint object detection and viewpoint estimation using CNN features
Li et al. Bus arrival time prediction based on mixed model
Marina et al. Deep Grid Net (DGN): A deep learning system for real-time driving context understanding
Kuzmin Classification and comparison of the existing SLAM methods for groups of robots
CN111126327B (en) Lane line detection method and system, vehicle-mounted system and vehicle
Gad et al. Real-time lane instance segmentation using SegNet and image processing
CN118296090A (en) Multi-dimensional space-time feature fusion track prediction method for automatic driving
Naufal et al. Weather image classification using convolutional neural network with transfer learning
CN109426791B (en) Multi-site and multi-vehicle matching method, server and system
CN113724293A (en) Vision-based intelligent internet public transport scene target tracking method and system
Panagiotaki et al. Sem-gat: Explainable semantic pose estimation using learned graph attention
CN113553975A (en) Pedestrian re-identification method, system, equipment and medium based on sample pair relation distillation
CN102831445A (en) Target detection method based on semantic Hough transformation and partial least squares
CN116740664A (en) Track prediction method and device
Fang et al. ContinuityLearner: Geometric continuity feature learning for lane segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant