CN112863183A - Traffic flow data fusion method and system - Google Patents


Info

Publication number
CN112863183A
CN112863183A
Authority
CN
China
Prior art keywords
sub
traffic flow
area
target object
reference point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110048773.7A
Other languages
Chinese (zh)
Other versions
CN112863183B (en)
Inventor
袁伟评
王升哲
李纪奎
吴江流
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shangqiao Information Technology Co ltd
Original Assignee
Shenzhen Shangqiao Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shangqiao Information Technology Co ltd
Priority to CN202110048773.7A
Publication of CN112863183A
Application granted
Publication of CN112863183B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/01 - Detecting movement of traffic to be counted or controlled
    • G08G1/0104 - Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125 - Traffic data processing
    • G08G1/017 - Detecting movement of traffic to be counted or controlled; identifying vehicles
    • G08G1/0175 - Identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a traffic flow data fusion method and system. The method specifically comprises the following steps: when the traffic flow of a preselected road section is detected, the millimeter wave detection technology and the video image detection technology are adopted simultaneously; the display area corresponding to the video image detection technology and the display area corresponding to the millimeter wave detection technology are then respectively divided equally into a plurality of first sub-areas and a plurality of second sub-areas; the second sub-area corresponding to each first sub-area is calibrated manually according to the real-time road conditions; after calibration is completed, a horizontal offset value and a vertical offset value are calculated according to the real-time coordinates of the target object, the coordinates of the first reference point, the total number of pixel rows contained in the first sub-area and the total number of pixel columns contained in the first sub-area; and the fusion position of the target object under the millimeter wave detection technology is then acquired according to the horizontal offset value, the vertical offset value, the coordinates of the second reference point and the length and width of the second sub-area. The fusion speed can be greatly increased, and the integrity of the traffic flow information is improved.

Description

Traffic flow data fusion method and system
Technical Field
The invention relates to the technical field of intelligent traffic, in particular to a traffic flow data fusion method and system.
Background
Traffic flow information is an important parameter reflecting the traffic conditions of urban roads. On one hand, it allows traffic planning and traffic management departments to accurately grasp the current road operation situation and to conduct traffic dispersion and signal control in time; on the other hand, it provides a scientific basis for subsequent road traffic development decisions. Therefore, how to acquire traffic flow information is a key problem that must be solved. At present, one of the following two methods is typically used:
Millimeter wave radar is only slightly influenced by weather and has a long observation distance (more than 200 meters), but it easily loses targets during congestion or when a vehicle is stationary;
Video detection can accurately detect targets whether a road is smooth or congested, but it is greatly influenced by weather, particularly on hazy, snowy or rainy days.
Therefore, to acquire more complete traffic flow information, the two technologies of millimeter wave radar and video detection need to be fused, but how to fuse the two kinds of detection data remains an unsolved problem.
Disclosure of Invention
The present invention provides a traffic flow data fusion method and a traffic flow data fusion system, aiming at the above-mentioned defects of the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows:
In one aspect, a traffic flow data fusion method is provided, wherein the method comprises the following steps:
when the traffic flow of the preselected road section is detected, simultaneously adopting a millimeter wave detection technology and a video image detection technology;
respectively and equally dividing a display area corresponding to a video image detection technology and a display area corresponding to a millimeter wave detection technology into a plurality of first sub-areas and a plurality of second sub-areas, wherein the first sub-areas and the second sub-areas are rectangular and have the same total number, and taking an angular point with the maximum horizontal coordinate and the maximum vertical coordinate on the first sub-area as a first reference point and an angular point corresponding to the first reference point on the second sub-area as a second reference point;
manually calibrating a second sub-area corresponding to each first sub-area according to the real-time road condition;
after calibration is completed, calculating a horizontal offset value and a vertical offset value of the target object during fusion display according to the real-time coordinate of the target object under the video image detection technology, the coordinate of a first reference point of a first subregion where the target object is located, the total row number of pixels contained in the first subregion and the total column number of pixels contained in the first subregion;
and acquiring the fusion position of the target object under the millimeter wave detection technology according to the horizontal deviation value, the vertical deviation value, the coordinate of the second reference point corresponding to the current first reference point, and the length and width of the second sub-region.
Preferably, the step of calculating the horizontal offset value and the vertical offset value of the target object in the fusion display includes:
According to K1 = A1/A2 and K2 = A3/A4, the horizontal offset value K1 and the vertical offset value K2 are respectively obtained, wherein A2 is the total number of pixel columns contained in the first sub-area, A1 is the number of pixel columns, counted in the left-to-right direction, at which the target object is located within the current first sub-area, A4 is the total number of pixel rows contained in the first sub-area, and A3 is the number of pixel rows, counted in the top-to-bottom direction, at which the target object is located within the current first sub-area;
the step of obtaining the fusion position of the target object under the millimeter wave detection technology comprises the following steps:
Note that the abscissa of the fusion position is X1 and the ordinate is Y1.
If K1 = K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2, wherein X2 is the abscissa and Y2 is the ordinate of the second reference point corresponding to the current first reference point;
if K1 = 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2 + W*K2, wherein W is the width of the second sub-area;
if K1 ≠ 1 and K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2, wherein L is the length of the second sub-area;
if K1 ≠ 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2 + W*K2.
Preferably, the display area is equally divided into 100 by 100 first sub-areas.
In another aspect, a traffic flow data fusion system based on the above traffic flow data fusion method is provided, and the system includes:
the traffic flow detection system based on millimeter waves is used for detecting the traffic flow of the preselected road section;
the traffic flow detection system is based on the video image and is used for detecting the traffic flow of the preselected road section;
the processor is used for calculating a horizontal offset value and a vertical offset value of the target object during fusion display according to the real-time coordinate of the target object under the video image detection technology, the coordinate of a first reference point of a first sub-area where the target object is located, the total row number of pixels contained in the first sub-area and the total column number of pixels contained in the first sub-area; the system is also used for acquiring the fusion position of the target object under the millimeter wave detection technology according to the horizontal deviation value, the vertical deviation value, the coordinate of a second reference point corresponding to the current first reference point, and the length and width of the second sub-region; the traffic flow detection system based on the millimeter waves and the traffic flow detection system based on the video images are in communication connection with the processor.
Preferably, the processor is specifically configured to:
According to K1 = A1/A2 and K2 = A3/A4, the horizontal offset value K1 and the vertical offset value K2 are respectively obtained, wherein A2 is the total number of pixel columns contained in the first sub-area, A1 is the number of pixel columns, counted in the left-to-right direction, at which the target object is located within the current first sub-area, A4 is the total number of pixel rows contained in the first sub-area, and A3 is the number of pixel rows, counted in the top-to-bottom direction, at which the target object is located within the current first sub-area; and
Note that the abscissa of the fusion position is X1 and the ordinate is Y1.
If K1 = K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2, wherein X2 is the abscissa and Y2 is the ordinate of the second reference point corresponding to the current first reference point;
if K1 = 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2 + W*K2, wherein W is the width of the second sub-area;
if K1 ≠ 1 and K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2, wherein L is the length of the second sub-area;
if K1 ≠ 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2 + W*K2.
Preferably, the display area is equally divided into 100 by 100 first sub-areas.
The invention has the beneficial effects that: when the traffic flow of the preselected road section is detected, the millimeter wave detection technology and the video image detection technology are adopted simultaneously; the display area corresponding to the video image detection technology and the display area corresponding to the millimeter wave detection technology are respectively divided equally into a plurality of first sub-areas and a plurality of second sub-areas, wherein the first sub-areas and the second sub-areas are rectangular and equal in total number, the corner point with the maximum abscissa and the maximum ordinate of each first sub-area is taken as a first reference point, and the corner point corresponding to the first reference point on each second sub-area is taken as a second reference point; the second sub-area corresponding to each first sub-area is then calibrated manually according to the real-time road conditions; after calibration is completed, the horizontal offset value and the vertical offset value of the target object during fusion display are calculated according to the real-time coordinates of the target object under the video image detection technology, the coordinates of the first reference point of the first sub-area where the target object is located, the total number of pixel rows contained in the first sub-area and the total number of pixel columns contained in the first sub-area; and the fusion position of the target object under the millimeter wave detection technology is then acquired according to the horizontal offset value, the vertical offset value, the coordinates of the second reference point corresponding to the current first reference point, and the length and width of the second sub-area.
According to the method, by calibrating the mapping relation between the second sub-areas under the millimeter wave detection technology and all the first sub-areas under the video image detection technology, and by calculating the horizontal offset value and the vertical offset value of the target object during fusion display, the fusion speed can be greatly increased and the integrity of the traffic flow information improved when the two kinds of data are subsequently fused; moreover, the position of the target object in the Cartesian coordinate system of the millimeter wave detection technology is its real position.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the present invention will be further described with reference to the accompanying drawings and embodiments, wherein the drawings in the following description are only part of the embodiments of the present invention, and for those skilled in the art, other drawings can be obtained without inventive efforts according to the accompanying drawings:
fig. 1 is a flowchart of a traffic flow data fusion method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a video image detection technique according to an embodiment of the present invention (K is a target object);
FIG. 3 is a schematic diagram corresponding to the millimeter wave detection technique when the method according to the first embodiment of the present invention is used (K is the target);
fig. 4 is a schematic composition diagram of a traffic flow data fusion system according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the following will clearly and completely describe the technical solutions in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
Example one
The embodiment of the invention provides a traffic flow data fusion method, which comprises the following steps as shown in figures 1 to 3:
step S1: when the traffic flow of the preselected road section is detected, the millimeter wave detection technology and the video image detection technology are adopted at the same time.
In this embodiment, both technologies are adopted for detection at the same time, so that the display area corresponding to the video image detection technology and the display area corresponding to the millimeter wave detection technology can both be observed, allowing the sub-areas to be calibrated subsequently. Under the millimeter wave detection technology, the position of the target object is described in Cartesian coordinates with the radar as the origin; under the video image detection technology, the position of the target object is described by the row and column numbers of pixels in the image, with the upper left corner of the image as the origin.
Step S2: the display area corresponding to the video image detection technology and the display area corresponding to the millimeter wave detection technology are respectively and equally divided into a plurality of first sub-areas 180 and a plurality of second sub-areas 181, wherein the first sub-areas and the second sub-areas are rectangular and have the same total number, the corner point with the maximum horizontal coordinate and the maximum vertical coordinate on the first sub-area is taken as a first reference point 182, and the corner point corresponding to the first reference point on the second sub-area is taken as a second reference point 183.
In this embodiment, compared with calibrating the mapping relation of every individual pixel under the video image detection technology, equally dividing the whole area and then calibrating the mapping relation of each first sub-area greatly reduces the amount of data to be stored and processed, which lowers the cost of use and improves the response speed.
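Step S2 can be sketched in code. The following is a minimal illustration and not the patent's implementation; the image size, the 100-by-100 grid and 1-based pixel indices are assumptions chosen to match the worked example later in this description.

```python
# Sketch of step S2: divide a display area into an equal grid of
# rectangular sub-areas, and find, for the sub-area containing a
# given pixel, the corner with the maximum abscissa and ordinate
# (the "first reference point").

def sub_area_of(col, row, img_w, img_h, n_cols=100, n_rows=100):
    """Grid index (i, j) of the sub-area containing pixel (col, row), 1-based pixels."""
    sub_w = img_w // n_cols   # pixel columns per sub-area
    sub_h = img_h // n_rows   # pixel rows per sub-area
    return (col - 1) // sub_w, (row - 1) // sub_h

def first_reference_point(i, j, img_w, img_h, n_cols=100, n_rows=100):
    """Corner of sub-area (i, j) with the maximum abscissa and ordinate."""
    sub_w = img_w // n_cols
    sub_h = img_h // n_rows
    return (i + 1) * sub_w, (j + 1) * sub_h
```

For a hypothetical 2000-by-2000 image divided 100 by 100, each sub-area is 20 by 20 pixels, and pixel (35, 35) falls in a sub-area whose first reference point is (40, 40).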
Step S3: and manually calibrating the second sub-area corresponding to each first sub-area according to the real-time road conditions.
In this embodiment, a first sub-area corresponds to a second sub-area when the same target object is displayed in both of them at the same time; this correspondence is what the manual calibration records.
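One plausible way to store the result of the manual calibration in step S3 is a lookup table from each first sub-area's grid index to the second reference point of its matching second sub-area. This structure and all coordinate values below are illustrative assumptions, not taken from the patent.

```python
# Sketch: the manual calibration as a lookup table. Keys are first
# sub-area grid indices; values are the matched second reference
# points in the millimeter wave Cartesian coordinate system.
calibration = {
    (1, 1): (12.0, 48.0),   # illustrative values only
    (2, 1): (16.0, 48.0),
}

def second_reference_point(grid_index):
    """Second reference point matched to the given first sub-area."""
    return calibration[grid_index]
```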
Step S4: after calibration is completed, a horizontal offset value and a vertical offset value of the target object during fusion display are calculated according to the real-time coordinate of the target object under the video image detection technology, the coordinate of a first reference point of a first sub-area where the target object is located, the total row number of pixels contained in the first sub-area and the total column number of pixels contained in the first sub-area.
Step S5: and acquiring the fusion position of the target object under the millimeter wave detection technology according to the horizontal deviation value, the vertical deviation value, the coordinate of the second reference point corresponding to the current first reference point, and the length and width of the second sub-region.
The calculating of the horizontal deviation value and the vertical deviation value of the target object during fusion display specifically includes:
According to K1 = A1/A2 and K2 = A3/A4, the horizontal offset value K1 and the vertical offset value K2 are respectively obtained, wherein A2 is the total number of pixel columns contained in the first sub-area, A1 is the number of pixel columns, counted in the left-to-right direction, at which the target object is located within the current first sub-area, A4 is the total number of pixel rows contained in the first sub-area, and A3 is the number of pixel rows, counted in the top-to-bottom direction, at which the target object is located within the current first sub-area;
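The offset computation K1 = A1/A2, K2 = A3/A4 can be sketched as follows. The function signature is an assumption for illustration: (col, row) is the target's pixel column and row in the full image, (ref_x, ref_y) is the first reference point of its sub-area, and total_cols and total_rows are the pixel columns and rows per sub-area.

```python
def offset_values(col, row, ref_x, ref_y, total_cols, total_rows):
    """Horizontal and vertical offset values (K1, K2) of the target."""
    a1 = col - (ref_x - total_cols)   # A1: column position inside the sub-area
    a3 = row - (ref_y - total_rows)   # A3: row position inside the sub-area
    return a1 / total_cols, a3 / total_rows
```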
the step of obtaining the fusion position of the target object under the millimeter wave detection technology comprises the following steps:
Note that the abscissa of the fusion position is X1 and the ordinate is Y1.
If K1 = K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2, wherein X2 is the abscissa and Y2 is the ordinate of the second reference point corresponding to the current first reference point;
if K1 = 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2 + W*K2, wherein W is the width of the second sub-area;
if K1 ≠ 1 and K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2, wherein L is the length of the second sub-area;
if K1 ≠ 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2 + W*K2.
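The four cases above collapse into two independent choices, one per axis, and can be transcribed directly; the variable names below are assumptions for illustration.

```python
def fusion_position(k1, k2, x2, y2, length, width):
    """Fusion position (X1, Y1) under the millimeter wave detection
    technology, following the patent's four cases:
      K1 = K2 = 1      -> (X2, Y2)
      K1 = 1, K2 != 1  -> (X2, Y2 + W*K2)
      K1 != 1, K2 = 1  -> (X2 - L*K1, Y2)
      K1 != 1, K2 != 1 -> (X2 - L*K1, Y2 + W*K2)
    """
    x1 = x2 if k1 == 1 else x2 - length * k1
    y1 = y2 if k2 == 1 else y2 + width * k2
    return x1, y1
```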
Wherein the display area is equally divided into 100 by 100 first sub-areas.
By way of example, suppose the first sub-area comprises 20 rows and 20 columns of pixels, the coordinates of the target object are (35, 35), and the coordinates of the first reference point of the first sub-area where the target object is located are (40, 40). Then A1 is 15, A3 is 15, K1 = 15/20 = 0.75, and K2 = 15/20 = 0.75.
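The arithmetic of this example can be checked directly; the variable names are illustrative.

```python
# Reproducing the worked example: a 20-row by 20-column first
# sub-area, target pixel (35, 35), first reference point (40, 40).
cols, rows = 20, 20
ref_x, ref_y = 40, 40
target_col, target_row = 35, 35

a1 = target_col - (ref_x - cols)   # columns into the sub-area
a3 = target_row - (ref_y - rows)   # rows into the sub-area
k1, k2 = a1 / cols, a3 / rows
print(a1, a3, k1, k2)  # 15 15 0.75 0.75
```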
The method provided by this embodiment specifically comprises the following steps: when the traffic flow of the preselected road section is detected, the millimeter wave detection technology and the video image detection technology are adopted simultaneously; the display area corresponding to the video image detection technology and the display area corresponding to the millimeter wave detection technology are respectively divided equally into a plurality of first sub-areas and a plurality of second sub-areas, wherein the first sub-areas and the second sub-areas are rectangular and equal in total number, the corner point with the maximum abscissa and the maximum ordinate of each first sub-area is taken as a first reference point, and the corner point corresponding to the first reference point on each second sub-area is taken as a second reference point; the second sub-area corresponding to each first sub-area is then calibrated manually according to the real-time road conditions; after calibration is completed, the horizontal offset value and the vertical offset value of the target object during fusion display are calculated according to the real-time coordinates of the target object under the video image detection technology, the coordinates of the first reference point of the first sub-area where the target object is located, the total number of pixel rows contained in the first sub-area and the total number of pixel columns contained in the first sub-area; and the fusion position of the target object under the millimeter wave detection technology is then acquired according to the horizontal offset value, the vertical offset value, the coordinates of the second reference point corresponding to the current first reference point, and the length and width of the second sub-area.
According to the method, by calibrating the mapping relation between the second sub-areas under the millimeter wave detection technology and all the first sub-areas under the video image detection technology, and by calculating the horizontal offset value and the vertical offset value of the target object during fusion display, the fusion speed can be greatly increased and the integrity of the traffic flow information improved when the two kinds of data are subsequently fused; moreover, the position of the target object in the Cartesian coordinate system of the millimeter wave detection technology is its real position.
Example two
The embodiment of the invention provides a traffic flow data fusion system, based on the traffic flow data fusion method provided by the embodiment one, as shown in fig. 4, the system comprises:
a millimeter wave based traffic flow detection system 10 for detecting traffic flow of a preselected road segment;
a video image-based traffic flow detection system 11 for detecting a traffic flow of a preselected road segment;
the processor 12 is configured to calculate a horizontal offset value and a vertical offset value of the target object during fusion display according to a real-time coordinate of the target object under the video image detection technology, a coordinate of a first reference point of a first sub-region where the target object is located, a total number of rows of pixels included in the first sub-region, and a total number of columns of pixels included in the first sub-region; the system is also used for acquiring the fusion position of the target object under the millimeter wave detection technology according to the horizontal deviation value, the vertical deviation value, the coordinate of a second reference point corresponding to the current first reference point, and the length and width of the second sub-region; and the traffic flow detection system based on the millimeter waves and the traffic flow detection system based on the video images are in communication connection with the processor.
Wherein the processor is specifically configured to:
According to K1 = A1/A2 and K2 = A3/A4, the horizontal offset value K1 and the vertical offset value K2 are respectively obtained, wherein A2 is the total number of pixel columns contained in the first sub-area, A1 is the number of pixel columns, counted in the left-to-right direction, at which the target object is located within the current first sub-area, A4 is the total number of pixel rows contained in the first sub-area, and A3 is the number of pixel rows, counted in the top-to-bottom direction, at which the target object is located within the current first sub-area; and
Note that the abscissa of the fusion position is X1 and the ordinate is Y1.
If K1 = K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2, wherein X2 is the abscissa and Y2 is the ordinate of the second reference point corresponding to the current first reference point;
if K1 = 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 and Y1 = Y2 + W*K2, wherein W is the width of the second sub-area;
if K1 ≠ 1 and K2 = 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2, wherein L is the length of the second sub-area;
if K1 ≠ 1 and K2 ≠ 1, the abscissa and the ordinate of the fusion position are respectively acquired according to X1 = X2 - L*K1 and Y1 = Y2 + W*K2.
Wherein the display area is equally divided into 100 by 100 first sub-areas.
The specific working process of the system provided by this embodiment is as follows: the traffic flow detection system based on millimeter waves and the traffic flow detection system based on video images simultaneously detect the traffic flow of the preselected road section; the display area corresponding to the video image detection technology and the display area corresponding to the millimeter wave detection technology are respectively divided equally into a plurality of first sub-areas and a plurality of second sub-areas, wherein the first sub-areas and the second sub-areas are rectangular and equal in total number, the corner point with the maximum abscissa and the maximum ordinate of each first sub-area is taken as a first reference point, and the corner point corresponding to the first reference point on each second sub-area is taken as a second reference point; the second sub-area corresponding to each first sub-area is calibrated manually according to the real-time road conditions; after calibration is completed, the processor calculates the horizontal offset value and the vertical offset value of the target object during fusion display according to the real-time coordinates of the target object under the video image detection technology, the coordinates of the first reference point of the first sub-area where the target object is located, the total number of pixel rows contained in the first sub-area and the total number of pixel columns contained in the first sub-area; the processor then acquires the fusion position of the target object under the millimeter wave detection technology according to the horizontal offset value, the vertical offset value, the coordinates of the second reference point corresponding to the current first reference point, and the length and width of the second sub-area.
According to the system, by calibrating the mapping relation between the second sub-areas under the millimeter wave detection technology and all the first sub-areas under the video image detection technology, and by calculating the horizontal offset value and the vertical offset value of the target object during fusion display, the fusion speed can be greatly increased and the integrity of the traffic flow information improved when the two kinds of data are subsequently fused; moreover, the position of the target object in the Cartesian coordinate system of the millimeter wave detection technology is its real position.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (6)

1. A traffic flow data fusion method is characterized by comprising the following steps:
when the traffic flow of the preselected road section is detected, simultaneously adopting a millimeter wave detection technology and a video image detection technology;
respectively and equally dividing a display area corresponding to a video image detection technology and a display area corresponding to a millimeter wave detection technology into a plurality of first sub-areas and a plurality of second sub-areas, wherein the first sub-areas and the second sub-areas are rectangular and have the same total number, and taking an angular point with the maximum horizontal coordinate and the maximum vertical coordinate on the first sub-area as a first reference point and an angular point corresponding to the first reference point on the second sub-area as a second reference point;
manually calibrating a second sub-area corresponding to each first sub-area according to the real-time road condition;
after calibration is completed, calculating a horizontal offset value and a vertical offset value of the target object during fusion display according to the real-time coordinate of the target object under the video image detection technology, the coordinate of a first reference point of a first subregion where the target object is located, the total row number of pixels contained in the first subregion and the total column number of pixels contained in the first subregion;
and acquiring the fusion position of the target object under the millimeter wave detection technology according to the horizontal deviation value, the vertical deviation value, the coordinate of the second reference point corresponding to the current first reference point, and the length and width of the second sub-region.
2. The traffic flow data fusion method according to claim 1, wherein the step of calculating a horizontal offset value and a vertical offset value of the object at the time of fusion display includes:
According to K1 = A1/A2 and K2 = A3/A4, the horizontal offset value K1 and the vertical offset value K2 are respectively obtained, wherein A2 is the total number of pixel columns contained in the first sub-area, A1 is the number of pixel columns, counted in the left-to-right direction, at which the target object is located within the current first sub-area, A4 is the total number of pixel rows contained in the first sub-area, and A3 is the number of pixel rows, counted in the top-to-bottom direction, at which the target object is located within the current first sub-area;
the step of obtaining the fusion position of the target object under the millimeter wave detection technology comprises the following steps:
denoting the abscissa of the fusion position as X1 and the ordinate as Y1:
if K1 = K2 = 1, acquiring the abscissa and the ordinate of the fusion position according to X1 = X2 and Y1 = Y2 respectively, wherein X2 is the abscissa and Y2 is the ordinate of the second reference point corresponding to the current first reference point;
if K1 = 1 and K2 ≠ 1, acquiring the abscissa and the ordinate of the fusion position according to X1 = X2 and Y1 = Y2 + W*K2 respectively, wherein W is the width of the second sub-area;
if K1 ≠ 1 and K2 = 1, acquiring the abscissa and the ordinate of the fusion position according to X1 = X2 - L*K1 and Y1 = Y2 respectively, wherein L is the length of the second sub-area;
if K1 ≠ 1 and K2 ≠ 1, acquiring the abscissa and the ordinate of the fusion position according to X1 = X2 - L*K1 and Y1 = Y2 + W*K2 respectively.
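The offset and fusion-position formulas of claim 2 reduce to two ratios and a small piecewise mapping. The Python sketch below uses hypothetical function names (`offsets`, `fusion_position`) but follows the claim's notation exactly: K1 = A1/A2, K2 = A3/A4, and X1 = X2 - L*K1, Y1 = Y2 + W*K2 in the non-unity cases:

```python
def offsets(a1: int, a2: int, a3: int, a4: int) -> tuple[float, float]:
    """Horizontal offset K1 = A1/A2 and vertical offset K2 = A3/A4.

    a1: pixel columns spanned by the target (left-right) in the first sub-area
    a2: total pixel columns in the first sub-area
    a3: pixel rows spanned by the target (top-bottom) in the first sub-area
    a4: total pixel rows in the first sub-area
    """
    return a1 / a2, a3 / a4

def fusion_position(k1: float, k2: float,
                    x2: float, y2: float,
                    length: float, width: float) -> tuple[float, float]:
    """Map the offsets onto the second sub-area per claim 2's four cases.

    (x2, y2): second reference point paired with the current first reference point
    length, width: L and W of the second sub-area
    """
    x1 = x2 if k1 == 1 else x2 - length * k1
    y1 = y2 if k2 == 1 else y2 + width * k2
    return x1, y1
```

Note that the four enumerated cases of the claim collapse into two independent one-dimensional choices, one per axis, which is why two conditional expressions suffice.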
3. The traffic flow data fusion method of claim 1, wherein the total number of the first sub-areas is 100 by 100.
4. A traffic flow data fusion system based on the traffic flow data fusion method according to any one of claims 1 to 3, characterized by comprising:
a millimeter wave based traffic flow detection system for detecting the traffic flow of a preselected road section;
a video image based traffic flow detection system for detecting the traffic flow of the preselected road section; and
a processor for calculating a horizontal offset value and a vertical offset value of the target object during fusion display according to the real-time coordinate of the target object under the video image detection technology, the coordinate of the first reference point of the first sub-area where the target object is located, and the total number of pixel rows and the total number of pixel columns contained in that first sub-area; and for acquiring the fusion position of the target object under the millimeter wave detection technology according to the horizontal offset value, the vertical offset value, the coordinate of the second reference point corresponding to the current first reference point, and the length and width of the second sub-area; the millimeter wave based traffic flow detection system and the video image based traffic flow detection system being in communication connection with the processor.
5. The traffic flow data fusion system of claim 4, wherein the processor is specifically configured to:
obtain the horizontal offset value K1 and the vertical offset value K2 according to K1 = A1/A2 and K2 = A3/A4 respectively, wherein A1 is the number of pixel columns spanned by the target object in the left-right direction within the current first sub-area, A2 is the total number of pixel columns contained in the first sub-area, A3 is the number of pixel rows spanned by the target object in the top-to-bottom direction within the current first sub-area, and A4 is the total number of pixel rows contained in the first sub-area; and
denoting the abscissa of the fusion position as X1 and the ordinate as Y1:
if K1 = K2 = 1, acquire the abscissa and the ordinate of the fusion position according to X1 = X2 and Y1 = Y2 respectively, wherein X2 is the abscissa and Y2 is the ordinate of the second reference point corresponding to the current first reference point;
if K1 = 1 and K2 ≠ 1, acquire the abscissa and the ordinate of the fusion position according to X1 = X2 and Y1 = Y2 + W*K2 respectively, wherein W is the width of the second sub-area;
if K1 ≠ 1 and K2 = 1, acquire the abscissa and the ordinate of the fusion position according to X1 = X2 - L*K1 and Y1 = Y2 respectively, wherein L is the length of the second sub-area;
if K1 ≠ 1 and K2 ≠ 1, acquire the abscissa and the ordinate of the fusion position according to X1 = X2 - L*K1 and Y1 = Y2 + W*K2 respectively.
6. The traffic flow data fusion system of claim 4, wherein the total number of the first sub-areas is 100 by 100.
CN202110048773.7A 2021-01-14 2021-01-14 Traffic flow data fusion method and system Active CN112863183B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110048773.7A CN112863183B (en) 2021-01-14 2021-01-14 Traffic flow data fusion method and system


Publications (2)

Publication Number Publication Date
CN112863183A true CN112863183A (en) 2021-05-28
CN112863183B CN112863183B (en) 2022-04-08

Family

ID=76005997


Country Status (1)

Country Link
CN (1) CN112863183B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100141508A1 (en) * 2008-12-10 2010-06-10 Us Government As Represented By The Secretary Of The Army Method and system for forming an image with enhanced contrast and/or reduced noise
CN103324936A (en) * 2013-05-24 2013-09-25 北京理工大学 Vehicle lower boundary detection method based on multi-sensor fusion
CN106340046A (en) * 2016-08-19 2017-01-18 中国电子科技集团公司第二十八研究所 Radar target location analysis method based on graphic geographic information
CN109543600A (en) * 2018-11-21 2019-03-29 成都信息工程大学 A kind of realization drivable region detection method and system and application
CN109615870A (en) * 2018-12-29 2019-04-12 南京慧尔视智能科技有限公司 A kind of traffic detection system based on millimetre-wave radar and video
CN110390695A (en) * 2019-06-28 2019-10-29 东南大学 The fusion calibration system and scaling method of a kind of laser radar based on ROS, camera
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision
CN110775052A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 Automatic parking method based on fusion of vision and ultrasonic perception
CN110929692A (en) * 2019-12-11 2020-03-27 中国科学院长春光学精密机械与物理研究所 Three-dimensional target detection method and device based on multi-sensor information fusion
CN111652097A (en) * 2020-05-25 2020-09-11 南京莱斯电子设备有限公司 Image millimeter wave radar fusion target detection method
CN111914715A (en) * 2020-07-24 2020-11-10 廊坊和易生活网络科技股份有限公司 Intelligent vehicle target real-time detection and positioning method based on bionic vision
CN112101361A (en) * 2020-11-20 2020-12-18 深圳佑驾创新科技有限公司 Target detection method, device and equipment for fisheye image and storage medium
CN112215306A (en) * 2020-11-18 2021-01-12 同济大学 Target detection method based on fusion of monocular vision and millimeter wave radar



Similar Documents

Publication Publication Date Title
CN108519605B (en) Road edge detection method based on laser radar and camera
CN110852342B (en) Road network data acquisition method, device, equipment and computer storage medium
CN103617412B (en) Real-time lane line detection method
CN111652097B (en) Image millimeter wave radar fusion target detection method
CN109284674B (en) Method and device for determining lane line
CN110415277B (en) Multi-target tracking method, system and device based on optical flow and Kalman filtering
CN108734105B (en) Lane line detection method, lane line detection device, storage medium, and electronic apparatus
CN109791598A (en) The image processing method of land mark and land mark detection system for identification
CN111932537B (en) Object deformation detection method and device, computer equipment and storage medium
CN107590470B (en) Lane line detection method and device
CN108280450A (en) A kind of express highway pavement detection method based on lane line
CN112307810B (en) Visual positioning effect self-checking method and vehicle-mounted terminal
JP6552448B2 (en) Vehicle position detection device, vehicle position detection method, and computer program for vehicle position detection
CN109583365A (en) Method for detecting lane lines is fitted based on imaging model constraint non-uniform B-spline curve
CN106127787A (en) A kind of camera calibration method based on Inverse projection
US20220254062A1 (en) Method, device and storage medium for road slope predicating
CN109753841B (en) Lane line identification method and device
CN104167109A (en) Detection method and detection apparatus for vehicle position
CN115797454B (en) Multi-camera fusion sensing method and device under bird's eye view angle
CN108256445A (en) Method for detecting lane lines and system
CN111681259A (en) Vehicle tracking model establishing method based on Anchor-free mechanism detection network
WO2022166606A1 (en) Target detection method and apparatus
CN109410598B (en) Traffic intersection congestion detection method based on computer vision
CN112817006B (en) Vehicle-mounted intelligent road disease detection method and system
CN112255604B (en) Method and device for judging accuracy of radar data and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant