CN113496528B - Method and device for calibrating position of visual detection target in fixed traffic roadside scene - Google Patents


Info

Publication number: CN113496528B
Application number: CN202111040735.3A
Authority: CN (China)
Prior art keywords: point, coordinate position, image, reference target, actual
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113496528A
Inventors: 彭贵福, 黄利雄, 张国壁, 舒键, 张永斌
Current Assignee: Hunan Zhongtianyun Technology Co Ltd
Original Assignee: Hunan Zhongtianyun Technology Co Ltd
Legal events: application filed by Hunan Zhongtianyun Technology Co Ltd; priority to CN202111040735.3A; publication of CN113496528A; application granted; publication of CN113496528B; active legal status; anticipated expiration.

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for calibrating the position of a visual detection target in a fixed traffic roadside scene. The method comprises the following steps: S01, acquiring the relative distance between two reference target points and their corresponding pixel coordinate positions in an image to be calibrated captured by a visual sensor to be calibrated, the visual sensor being fixedly mounted at a specified roadside position, and determining the actual coordinate position of the central point of the traffic road surface within the visual detection range from the acquired relative distance and pixel coordinate positions; S02, acquiring the actual coordinate position of the target point to be calibrated, and calculating its pixel coordinate position in the image to be calibrated from that actual coordinate position, the determined actual coordinate position of the central point, and a pre-constructed coordinate position relation model between a target point and the central point, thereby completing calibration. The invention has the advantages of simple operation, low cost, high efficiency, and high precision.

Description

Method and device for calibrating position of visual detection target in fixed traffic roadside scene
Technical Field
The invention relates to the technical field of intelligent traffic, and in particular to a method and a device for calibrating the position of a visual detection target in a fixed traffic roadside scene.
Background
With the rapid growth in the number of road vehicles, road transport conditions have become increasingly complex: traffic congestion is commonplace, accidents occur frequently, and the burden on traditional traffic management systems grows ever heavier. Intelligent traffic systems can better address these conditions. Intelligent traffic is based on intelligent transportation systems (ITS) and makes full use of technologies such as the internet of things, cloud computing, the internet, artificial intelligence, automatic control, and the mobile internet in the traffic field, so that the traffic system gains capabilities of perception, interconnection, analysis, prediction, and control over a region, a city, or an even larger space-time range, improving its operating efficiency and management level.
To realize intelligent control in intelligent traffic, road side units are generally arranged on both sides of the traffic road surface, as shown in fig. 1. The road side units comprise various sensors, such as radars and vision sensors, and acquire vehicle target information on the road in real time. The basis for realizing intelligent traffic is the collection and fusion of multi-sensor data: information collected by multiple sensors is gathered and calibrated to a unified coordinate system. For example, target information obtained by millimeter-wave radar detection is calibrated and fused onto the corresponding target in the camera image, so that real-time multi-dimensional monitoring and behavior analysis can be performed on the various elements (people, vehicles, animals, and the like) in a traffic scene. To realize effective fusion of multi-sensor information, the key is to calibrate and register the corresponding targets detected by each sensor so as to unify them in the same coordinate system; only an accurate and efficient calibration method can ensure accurate fusion of the target information detected by each sensor.
The sensors in the road side units of a traffic scene are fixed; that is, once the sensors are installed, the detection environment and the traffic scene are fixed and do not change dynamically, so after calibration is completed once, the actual position coordinates of each point on the road surface and the pixel coordinates in the image are in a fixed one-to-one correspondence. Most prior-art calibration targets information fusion for mobile sensors, such as vehicle-mounted multi-sensor setups, and such mobile-sensor calibration methods are not suitable for fixed sensors. Traditional camera calibration is usually realized by means of auxiliary calibration equipment; this approach is complex, has low calibration efficiency, and incurs high implementation cost because special auxiliary equipment is required. When such methods are applied to sensor calibration in a traffic roadside scene, it is difficult to realize calibration between the traffic road surface and the calibration image quickly and accurately.
Chinese patent application CN202010401379.2 discloses a camera calibration method in which a calibration board of a specific form is arranged, a plurality of calibration images containing the board are captured by the camera, the coding and non-coding regions of each calibration image are detected, the coding information of those regions is extracted and matched against the preset calibration board, and the calibration data of the camera is then calculated. This scheme requires a calibration board of a specific form and depends on identification and matching of image coding regions; it is complex to realize, costly, and unsuitable for calibration between the traffic road surface and the calibration image in a traffic roadside scene.
In summary, prior-art sensor calibration does not consider the characteristics of the fixed traffic roadside scene and has difficulty realizing calibration between the traffic road surface and the calibration image quickly and accurately. There is therefore an urgent need for a visual target position calibration method applicable to the fixed traffic roadside scene that enables quick and accurate calibration between the traffic road surface and the calibration image.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides a method and a device for calibrating the position of a visual detection target in a fixed traffic roadside scene that are simple to implement, low in cost, efficient, and precise, and that fully utilize the characteristics of the fixed traffic roadside scene to realize precise calibration between the traffic road surface and the calibration image of the visual sensor.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a visual detection target position calibration method under a fixed traffic roadside scene comprises the following steps:
s01, determining the position of a center point: acquiring the relative distance between two reference target points and corresponding pixel coordinate positions in an image to be calibrated acquired by a visual sensor to be calibrated, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a roadside, the reference target points are positioned on a traffic road surface in a visual detection range, and the actual coordinate position of a central point in the traffic road surface in the visual detection range is determined according to the acquired relative distance and the pixel coordinate positions;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image to be calibrated according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point to finish calibration.
Further, in step S01, a first relationship model between the relative distance between the two reference target points, the pixel distance between the two reference target points and the central point in the image, and the actual coordinate position of the central point is pre-constructed, and after the relative distance between the two reference target points and the pixel coordinate position are obtained, the actual coordinate position of the central point is determined by using the first relationship model.
Further, if the two reference target points are both interior points, that is, the Y-axis distance of each reference target point is smaller than the ordinate value of the center point, the expression of the first relationship model is as follows:
[Two equation images defining the model expressions are published as images in the original and are not reproduced here. The recoverable auxiliary quantities are:]
E1 = tan a1 − tan b1
F1 = tan a1 + tan b1
G1 = tan a1 × tan b1
[Four further equation images are published as images in the original and are not reproduced here.]
wherein h is the installation height of the image acquisition equipment, f is its focal length value, dy1′ and dy1″ are respectively the pixel distances in the image between the center point and the two reference target points (both interior points), and yc is the actual coordinate position of the central point;
if the two reference target points are both outer points, that is, the Y-axis distance of the two reference target points is greater than the ordinate value of the center point, the expression of the first relationship model is as follows:
yc = h × tan xc2
[One equation image is published as an image in the original and is not reproduced here. The recoverable auxiliary quantities are:]
E2 = tan a2 − tan b2
F2 = tan a2 + tan b2
G2 = tan a2 × tan b2
[Five further equation images are published as images in the original and are not reproduced here.]
wherein dy2′ and dy2″ are respectively the pixel distances in the image between the center point and the two reference target points (both exterior points), and yc is the actual coordinate position of the central point on the road surface.
Further, in the step S01, the two reference target points are respectively taken on the dashed lane lines of two adjacent lanes on the traffic road surface, and the relative distance between the two reference target points is determined according to the spacing of the dashed lane lines of the two adjacent lanes; or, in step S01, the two reference target points are taken on the two sides of the traffic road, and the distance between them is measured to obtain their relative distance.
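Where the dashed lane markings are used as prior information, the relative distance follows directly from the marking geometry. The following is a minimal sketch under an assumed 6 m dash / 9 m gap marking cycle; both constants and the function name are hypothetical and must be replaced with the actual marking dimensions of the road being calibrated:

```python
# Hypothetical helper: derive the relative distance between two reference
# target points placed on dashed lane markings, using the marking geometry
# as prior information. The 6 m dash / 9 m gap cycle is an assumption,
# not a value taken from the patent.
DASH_LEN_M = 6.0   # assumed painted-segment length
GAP_LEN_M = 9.0    # assumed gap length between segments

def relative_distance_from_dashes(n_cycles: int) -> float:
    """Relative distance spanned by n full dash+gap cycles along a lane line."""
    return n_cycles * (DASH_LEN_M + GAP_LEN_M)
```

For example, two reference points two full cycles apart would be 30 m apart under these assumed dimensions.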
Further, the coordinate position relation model is constructed according to the pixel coordinate position of the target point in the image, the actual coordinate position of the target point and the actual coordinate position of the central point.
A visual detection target position calibration method under a fixed traffic roadside scene comprises the following steps:
s01, determining the position of a center point: acquiring an actual coordinate position of a reference target point and a corresponding pixel coordinate position of the reference target point in an image, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a road side, the reference target point is positioned on a traffic road surface in a visual detection range of the visual sensor to be calibrated, and the actual coordinate position of a central point is determined according to the acquired coordinate position;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point, so as to complete the calibration of the target point to be calibrated in the image.
Further, in step S01, a second relationship model between the actual coordinate position of the target point, the pixel coordinate position of the target point in the image, and the actual coordinate position of the central point is pre-constructed, and after the actual coordinate position and the pixel coordinate position of the reference target point are obtained, the actual coordinate position of the central point is determined using the second relationship model.
Further, in step S01, when the reference target point is an interior point, that is, the Y-axis distance of the reference target point is smaller than the ordinate value of the center point, the expression of the second relationship model is:
[The second-relationship-model expression for the interior-point case is published as an equation image in the original and is not reproduced here.]
wherein h is the installation height of the image acquisition equipment, f is its focal length value, dy1 is the vertical pixel distance in the image between the reference target point and the central point, y1 is the normal distance in the actual coordinate position of the interior reference target point, and yc is the actual coordinate position of the central point in the vertical direction;
when the reference target point is an external point, that is, the Y-axis distance of the reference target point is greater than the ordinate value of the center point, the expression of the second relationship model is as follows:
[The second-relationship-model expression for the exterior-point case is published as an equation image in the original and is not reproduced here.]
where dy2 is the vertical pixel distance in the image between the exterior reference target point and the central point, and y2 is the normal distance in the actual coordinate position of the exterior reference target point.
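The published expressions for the second relationship model are available only as images, but the surrounding text fixes their inputs and output (y1 or y2, dy1 or dy2, h, f, giving yc). The sketch below is a plausible reconstruction under an assumed pinhole model consistent with the setup of fig. 3; it is not the patent's verbatim formula:

```python
def center_y_from_reference(y: float, dy: float, h: float, f: float,
                            inner: bool = True) -> float:
    """Recover the center point's normal distance yc from one reference point.

    Derived from an ASSUMED pinhole model in which the vertical pixel offset
    of a ground point at normal distance y is
        dy = f * h * (yc - y) / (h**2 + y * yc)   (interior point, y < yc)
        dy = f * h * (y - yc) / (h**2 + y * yc)   (exterior point, y > yc)
    Solving each relation for yc gives the closed forms below. This is a
    hedged reconstruction, not the published equation.
    """
    if inner:
        return h * (f * y + dy * h) / (f * h - dy * y)
    return h * (f * y - dy * h) / (f * h + dy * y)
```

Round-tripping a synthetic point through the assumed forward model and this inverse recovers the chosen yc exactly, which is a useful sanity check for any candidate model.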
A visual detection target position calibration device under a fixed traffic roadside scene comprises a processor and a memory, wherein the memory stores a computer program, and the processor executes the computer program to perform the method as described above.
A computer-readable storage medium having stored thereon a computer program which, when executed, implements the method as described above.
Compared with the prior art, the invention has the advantages that:
1. According to the invention, the actual coordinate position of the central point is determined from reference points, and the pixel coordinate position of the target point to be calibrated in the image is then calculated from this coordinate position using the corresponding relationship. The method is simple and convenient to operate, low in cost, and requires no additional auxiliary calibration tool. Because the characteristics of the fixed traffic roadside scene are fully utilized, accurate calibration between the traffic road surface and the image can be realized; after the absolute coordinates of the various targets on the traffic road surface are obtained, one-to-one calibration between the pixel coordinates and the actual distances of the relevant road-surface parts in the visual image can be realized while ensuring calibration accuracy and simplicity of operation.
2. The method can be applied to various different fixed traffic scenes, and can realize one-to-one corresponding calibration of the pixel coordinates and the actual distances of the relevant road surface parts in various different traffic scenes on the premise of ensuring the calibration accuracy and the operation simplicity after extracting the absolute coordinates of various targets on the traffic road surface.
3. According to the method, the steps of fusion calibration are simplified, manual calibration of the actual position coordinates of the road surface central point is not needed, danger caused by operation in the road surface due to manual calibration is avoided, dependence on manual installation operation is reduced, and the process of mapping and calibrating the actual position coordinates of all points of the road surface of a traffic scene to the pixel coordinates in the imaged road surface can be efficiently and accurately completed.
Drawings
FIG. 1 is a schematic diagram of vehicle target detection in a traffic roadside scene.
Fig. 2 is a schematic flow chart illustrating an implementation of the visual inspection target position calibration method in a fixed traffic roadside scene in accordance with embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of a principle of a calibration model in a traffic road side scene constructed in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of the principle of projection of the target point in the first case (inner point) in embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of the principle of projection of the target point in the second case (outer point) in embodiment 1 of the present invention.
Fig. 6 is a schematic flow chart illustrating an implementation of calibrating a position of a visual detection target in a scene of fixed traffic roadside in embodiment 2 of the present invention.
Fig. 7 is a schematic flow chart illustrating an implementation of calibrating a position of a visual detection target in a scene of fixed traffic roadside in embodiment 3 of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
Example 1:
as shown in fig. 2, the method for calibrating the position of the visual detection target in the fixed traffic roadside scene in the embodiment includes the following steps:
s01, determining the position of a center point: acquiring the relative distance between two reference target points and corresponding pixel coordinate positions in an image to be calibrated acquired by a visual sensor to be calibrated, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a roadside, the reference target points are positioned on a traffic road surface in the visual detection range of the visual sensor to be calibrated, and the actual coordinate position of a central point in the traffic road surface in the visual detection range is determined according to the acquired relative distance and the pixel coordinate positions;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image to be calibrated according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point, so as to complete the calibration of the target point to be calibrated in the image.
In a fixed traffic roadside scene, once a vision sensor (such as a camera) is fixedly installed, as shown in fig. 1, its imaging can be simplified to pinhole imaging: the sensor is fixed at height h and shoots downwards at a certain angle, and each point on the road surface corresponds one-to-one to a pixel of the picture captured by the camera. The image of the central point of the detection range on the road surface is the central position of the image coordinates, and the relative position relationship between the central point and a target point on the actual road corresponds (by proportional mapping) to their relative position relationship in the image. The central point can therefore serve as a fixed reference: once its actual coordinate position on the traffic road surface is determined, the position in the image of any target point to be calibrated on the road surface can be determined from it.
This embodiment considers the characteristics of calibration in the fixed traffic roadside scene: the actual coordinate position of the central point is determined from reference points, and the pixel coordinate position of the target point to be calibrated in the image is then calculated from this coordinate position using the corresponding relationship. The method is simple and convenient to operate, low in cost, and requires no additional auxiliary calibration tool. Because the characteristics of the fixed traffic roadside scene are fully utilized, accurate calibration between the traffic road surface and the image can be realized; after the absolute coordinates of the various targets on the traffic road surface are obtained, one-to-one calibration between the pixel coordinates and the actual distances of the relevant road-surface parts in the visual image can be realized while ensuring calibration accuracy and simplicity of operation.
In this embodiment, after the vision sensor to be calibrated (specifically, a camera) is fixedly installed, it is first corrected and its internal reference matrix is calibrated. A camera in service always exhibits some error; it can be corrected with the classical Zhang calibration method, which yields the internal reference matrix after correction:
    [ fx   0   xc ]
    [  0   fy  yc ]
    [  0   0    1 ]
In practice, fx and fy above are very close, so their mean can be taken as the focal value, i.e.:
    f = (fx + fy) / 2
wherein xc and yc are respectively the horizontal and vertical centers of the resulting image.
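The intrinsic step above can be sketched as follows. The helper name `focal_and_center` is hypothetical, and the matrix layout is the standard pinhole intrinsic form the text describes; in practice the matrix itself would come from a Zhang-method calibration (for example OpenCV's `cv2.calibrateCamera`):

```python
import numpy as np

def focal_and_center(K: np.ndarray) -> tuple:
    """Average fx and fy of an intrinsic matrix into a single focal value,
    and return the principal point (xc, yc), following the text above.

    K is assumed to have the standard layout:
        [[fx, 0, xc],
         [0, fy, yc],
         [0,  0,  1]]
    """
    fx, fy = K[0, 0], K[1, 1]
    xc, yc = K[0, 2], K[1, 2]
    return (fx + fy) / 2.0, (xc, yc)
```

For a 1920x1080 sensor one would typically see xc near 960 and yc near 540 after calibration.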
After the internal reference of the sensor is calibrated, the coordinate relationship between the target point and the central point needs to be further determined. In the following, taking the specific application embodiment as an example, the coordinate relationship between the target point and the central point in the fixed traffic road side scene is analyzed in detail.
As shown in fig. 3, in a fixed traffic roadside scene the imaging of the camera is simplified to pinhole imaging. The camera is fixed at height h and shoots downwards at a certain angle, and each point on the traffic road surface corresponds one-to-one to a pixel of the picture captured by the camera. The Y-axis direction represents the normal distance direction of the camera, the X-axis direction the horizontal distance direction, and the Z-axis direction the height direction. Point D is defined as the central point, whose image is the central position of the image coordinates. Target point P1 represents an interior point, i.e. a target whose Y-axis distance is smaller than that of the central point; target point P2 represents an exterior point, i.e. a target whose Y-axis distance is greater than that of the central point.
According to the position relationship between the target point and the central point, the following two situations can be analyzed:
(1) In the first case, the detected target point is an interior point P1. Projecting P1 onto the Y axis and the X axis respectively gives the projection diagram shown in fig. 4, where the actual coordinates of P1 are (x1, y1), i.e. the normal distance is y1 and the horizontal distance is x1; the angle formed by P1 and the Z axis is α, the angle formed by P1 and the center point D is β, and the angle formed by D and the Y axis is γ. The following equation relationships are obtained:
[Equations (1) through (4), published as images in the original, are not reproduced here.]
from the above formula, one can obtain:
[Equation (5), published as an image in the original, is not reproduced here.]
then there are:
[Equation (6), published as an image in the original, is not reproduced here.]
where dy1 denotes the vertical pixel distance in the image between the target point P1 and the center point, and f denotes the focal length value of the camera.
Furthermore, by the similar-triangle theorem:
[Equation (7), published as an image in the original, is not reproduced here.]
where dx1 denotes the horizontal pixel distance in the image between the target point P1 and the center point, and ys1 denotes the distance between the image of P1 in the imaging plane and the Z axis, which can be expressed as follows:
[Equation (8), published as an image in the original, is not reproduced here.]
then it can be obtained:
[Equation (9), published as an image in the original, is not reproduced here.]
as can be seen from (6) and (9), the pixel coordinates of the target point P1 in the image are only related to the actual coordinates (x1, y1) of the target point P1, the actual coordinates (xc, yc) of the center point, the installation height h, and the focal length value f, and if the installation height h and the focal length value f are known quantities, the coordinates of the center point can be determined, and the pixel coordinates corresponding to the arbitrary interior point in the image can be calculated according to equations (6) and (9) based on the actual coordinate position of the arbitrary interior point, thereby completing the calibration of the arbitrary interior point in the image.
(2) In the second case, the detected target point is an exterior point P2. Projecting P2 onto the Y axis and the X axis respectively gives the projection diagram shown in fig. 5, where the actual coordinates of P2 are (x2, y2), i.e. the normal distance is y2 and the horizontal distance is x2; the angle formed by P2 and the Z axis is θ, the angle formed by P2 and the center point D is δ, and the angle formed by D and the Y axis is λ. The following equation relationships are obtained:
[Equations (10) through (13), published as images in the original, are not reproduced here.]
from the above formula, one can obtain:
[Equation (14), published as an image in the original, is not reproduced here.]
then there are:
[Equation (15), published as an image in the original, is not reproduced here.]
where dy2 denotes the vertical pixel distance in the image between the target point P2 and the center point, and f denotes the focal length value of the camera.
Meanwhile, according to the similarity theorem of triangles, the following can be obtained:
[Equation (16), published as an image in the original, is not reproduced here.]
where dx2 denotes the horizontal pixel distance in the image between the target point P2 and the center point, and ys2 denotes the distance between the image of P2 in the imaging plane and the Z axis, as follows:
[Equation (17), published as an image in the original, is not reproduced here.]
further, it is possible to obtain:
[Equation (18), published as an image in the original, is not reproduced here.]
as can be seen from the above equations (15) and (18), as in the principle of interior points, the pixel coordinates of the target point P2 in the image are related only to the actual coordinates (x2, y2) of the target point P2, the actual coordinates (xc, yc) of the center point, the installation height h, and the focal length value f, and the installation height h and the focal length value f are known quantities, and if the coordinates of the center point can be determined, the corresponding pixel coordinates of the interior point in the image can be calculated according to the equations (15) and (18) based on the actual coordinate position of any exterior point, and the calibration of any exterior point in the image is completed.
From the above analysis, whether for an interior point or an exterior point, there is a definite relationship between the actual coordinates of any target point relative to the central point and its pixel coordinates in the image; if the actual coordinate position of the central point can be obtained, the imaging position of a target point in the calibration picture can be calibrated from that target point's actual coordinates. Determining the actual coordinate position of the central point is therefore the key to calibration. In an actual traffic scene, however, the traffic environment is often complex: whether on an expressway or an urban road, the central point D is generally located in the middle of the road surface, so directly calibrating it manually is time-consuming, labor-intensive, and extremely dangerous, and manual calibration is prone to errors that compromise calibration precision. In view of these problems, this embodiment determines the position coordinates of the central point D by means of two reference target points, reducing implementation complexity, ensuring safety and reliability, and avoiding the errors caused by manual acquisition.
In step S01 of this embodiment, a first relationship model among the relative distance between the two reference target points, the pixel distances between the two reference target points and the center point in the image, and the actual coordinate position of the center point is pre-constructed. After the relative distance between the two reference target points and their pixel coordinate positions are obtained, the actual coordinate position of the center point is determined using the first relationship model. By means of the coordinate mapping relationship formed between the two target points and the center point, only the relative distance between the two corresponding points in the visual imaging needs to be acquired; the actual coordinates of the center point can then be determined from this relative distance without determining the actual absolute positions of the two points, so that the whole calibration process can be completed and the target point positions on the actual road surface calibrated to the corresponding pixel positions in the picture. The relative distance between two reference target points in a road environment is easy to acquire: in particular, it can be determined from prior information in the road environment, such as the known distance between adjacent lane markings, and the positions of the two reference target points in the image are likewise easy to determine. In this way, the actual coordinate position of the center point can be determined simply and efficiently by means of two reference target points, without manual calibration in the roadway.
The following takes as a detailed example the case in which the relative distance between two points on the road surface (within the visual detection range) can be determined, but their actual absolute positions cannot; the actual coordinate position of the center point is then obtained from the relative distance between the two points and the pixel positions at which the corresponding points are imaged in the picture.
From the above expressions (1) to (18), it can be seen that with f and h known, the expression for the actual normal distance of any point establishes a definite relationship among dy1, y1 and yc (or among dy2, y2 and yc): once any two of the variables are determined, the value of the third can be obtained.
When the reference target point is an interior point, there are:
dy1 = f×tan(arctan(yc/h) − arctan(y1/h))  (19)
when the reference target point is an external point, there are:
dy2 = f×tan(arctan(y2/h) − arctan(yc/h))  (20)
according to the types of the two reference target points, the following three cases are specifically analyzed:
(1) In the first case: when both reference target points are interior points, the actual normal distance between the two reference target points is:
Δy = y1″ − y1′ = h×[tan(xc1 − arctan(dy1″/f)) − tan(xc1 − arctan(dy1′/f))]  (21)
where Δy is the actual normal distance between the two reference target points, and y1′ and y1″ are the normal distances of the two reference target points in the camera coordinate system, satisfying:
yc > y1″ > y1′  (22)
and dy1′ and dy1″ are their corresponding pixel distances from the center point in the image.
Then it can be obtained:
c1 = tan(a1 − xc1) − tan(b1 − xc1)  (23)
wherein:
a1 = arctan(dy1′/f)  (24)
b1 = arctan(dy1″/f)  (25)
c1 = Δy/h  (26)
xc1 = arctan(yc/h)  (27)
Since a1, b1 and c1 are all known quantities, the trigonometric equation in xc1 can be solved, giving:
xc1 = arctan[(c1×F1 ± √Δ1)/(2×(E1 − c1×G1))]  (28)
wherein:
E1=tan a1-tan b1 (29)
F1=tan a1+tan b1 (30)
G1=tan a1×tan b1 (31)
Δ1 = c1²×F1² − 4×(E1 − c1×G1)×(E1 − c1)  (32)
further, the value of the actual normal distance yc of the center point can be obtained, which is:
yc = h×tan xc1  (33)
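As an illustrative aid (not part of the patent), the interior-point branch of the first relationship model, equations (24) to (33) as reconstructed above, can be sketched in Python. The function name, variable names and the synthetic geometry (h = 6 m, f = 1000 px, yc = 30 m) are assumptions for the example; the physically valid root (yc > 0) of the quadratic is selected:

```python
import math

def center_point_from_two_interior_points(h, f, dy1p, dy1pp, delta_y):
    """Sketch of the interior-point first relationship model (eqs. (24)-(33)).
    dy1p / dy1pp: pixel distances of the nearer / farther reference point
    from the image center; delta_y: their actual along-road separation."""
    a1 = math.atan(dy1p / f)                       # eq. (24)
    b1 = math.atan(dy1pp / f)                      # eq. (25)
    c1 = delta_y / h                               # eq. (26)
    E1 = math.tan(a1) - math.tan(b1)               # eq. (29)
    F1 = math.tan(a1) + math.tan(b1)               # eq. (30)
    G1 = math.tan(a1) * math.tan(b1)               # eq. (31)
    delta1 = (c1 * F1) ** 2 - 4 * (E1 - c1 * G1) * (E1 - c1)   # eq. (32)
    roots = [(c1 * F1 + s * math.sqrt(delta1)) / (2 * (E1 - c1 * G1))
             for s in (1.0, -1.0)]
    return h * max(roots)                          # eq. (33): yc = h*tan(xc1)

# round-trip check on synthetic geometry (assumed values, not from the patent)
h, f, yc_true = 6.0, 1000.0, 30.0
xc = math.atan(yc_true / h)
dy = lambda y: f * math.tan(xc - math.atan(y / h))   # eq. (19), interior point
yc_est = center_point_from_two_interior_points(h, f, dy(15.0), dy(24.0), 9.0)
```

The round-trip check forward-projects two interior points at 15 m and 24 m through equation (19) and recovers the center-point distance from only their pixel offsets and their 9 m separation.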
(2) in the second case: when both reference target points are outliers, then the actual normal distance between the two reference target points is:
Δy = y2″ − y2′ = h×[tan(xc2 + arctan(dy2″/f)) − tan(xc2 + arctan(dy2′/f))]  (34)
wherein y2′ and y2″ are the normal distances of the two reference target points in the camera coordinate system, respectively, and satisfy:
y2″>y2′>yc
where dy2′ and dy2″ are the pixel distances in the image between the two exterior reference target points and the center point, respectively.
In the same way, the following can be obtained:
c2 = tan(a2 + xc2) − tan(b2 + xc2)  (35)
wherein:
a2 = arctan(dy2″/f)  (36)
b2 = arctan(dy2′/f)  (37)
c2 = Δy/h  (38)
xc2 = arctan(yc/h)  (39)
the a2, b2 and c2 are all known quantities, and the trigonometric function equation related to x is solved to obtain the solution:
xc2 = arctan[(−c2×F2 ± √Δ2)/(2×(E2 − c2×G2))]  (40)
wherein:
E2=tan a2-tan b2 (41)
F2=tan a2+tan b2 (42)
G2=tan a2×tan b2 (43)
Δ2 = c2²×F2² − 4×(E2 − c2×G2)×(E2 − c2)  (44)
further, the value of the actual normal distance yc of the center point is obtained as follows:
yc=h×tan xc2 (45)
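By the same token, the exterior-point branch, equations (36) to (45) as reconstructed above, can be sketched in the same way; as before, the function name and the synthetic geometry are assumptions for illustration only:

```python
import math

def center_point_from_two_exterior_points(h, f, dy2p, dy2pp, delta_y):
    """Sketch of the exterior-point first relationship model (eqs. (36)-(45)).
    dy2p / dy2pp: pixel distances of the nearer / farther exterior reference
    point from the image center; delta_y: their actual separation."""
    a2 = math.atan(dy2pp / f)                      # eq. (36), farther point
    b2 = math.atan(dy2p / f)                       # eq. (37), nearer point
    c2 = delta_y / h                               # eq. (38)
    E2 = math.tan(a2) - math.tan(b2)               # eq. (41)
    F2 = math.tan(a2) + math.tan(b2)               # eq. (42)
    G2 = math.tan(a2) * math.tan(b2)               # eq. (43)
    delta2 = (c2 * F2) ** 2 - 4 * (E2 - c2 * G2) * (E2 - c2)   # eq. (44)
    roots = [(-c2 * F2 + s * math.sqrt(delta2)) / (2 * (E2 - c2 * G2))
             for s in (1.0, -1.0)]
    return h * max(roots)                          # eq. (45): yc = h*tan(xc2)

# round-trip check: exterior points at 45 m and 60 m, center point at 30 m
h, f, yc_true = 6.0, 1000.0, 30.0
xc = math.atan(yc_true / h)
dy_ext = lambda y: f * math.tan(math.atan(y / h) - xc)   # eq. (20)
yc_est = center_point_from_two_exterior_points(h, f, dy_ext(45.0), dy_ext(60.0), 15.0)
```

Note the sign difference from the interior case: the numerator of equation (40) carries −c2×F2 rather than +c1×F1.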
(3) In the third case: if one reference target point is an interior point and the other is an exterior point, with normal distances y11 and y22 respectively in the camera coordinate system, the distance between the two points is:
Δy = y22 − y11 = h×[tan(xc + arctan(dy22/f)) − tan(xc − arctan(dy11/f))]  (46)
where dy22 and dy11 are the pixel distances of the two reference target points from the center point in the image, and with xc = arctan(yc/h) the correspondence can be simplified as:
Δy/h = tan(xc + arctan(dy22/f)) − tan(xc − arctan(dy11/f))
This equation has no analytic solution here; only an approximate numerical solution can be obtained, which easily introduces a certain error. Therefore, when the two reference target points are selected, both should preferably be interior points or both exterior points, avoiding the situation in which one is an interior point and the other an exterior point.
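Although the patent recommends avoiding the mixed case, the approximate numerical solution it mentions can be sketched with simple bisection. This is an illustration under the reconstructed mixed-case relation above, not the patent's own procedure, and the names are illustrative:

```python
import math

def center_point_mixed_pair(h, f, dy11, dy22, delta_y, iters=80):
    """Numerical sketch for the mixed interior/exterior case: solve
    delta_y/h = tan(xc + arctan(dy22/f)) - tan(xc - arctan(dy11/f))
    for xc by bisection, then return yc = h*tan(xc)."""
    a22 = math.atan(dy22 / f)        # exterior point, offset above center
    b11 = math.atan(dy11 / f)        # interior point, offset below center
    c = delta_y / h
    g = lambda x: math.tan(x + a22) - math.tan(x - b11) - c
    lo = b11 + 1e-9                  # xc must exceed b11 so that y11 > 0
    hi = math.pi / 2 - a22 - 1e-9    # keep tan() finite
    for _ in range(iters):           # g increases monotonically on (lo, hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return h * math.tan(0.5 * (lo + hi))

# interior point at 15 m, exterior point at 60 m, true center point at 30 m
h, f = 6.0, 1000.0
yc_est = center_point_mixed_pair(h, f, 2500.0 / 13.5, 5000.0 / 51.0, 45.0)
```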
Based on the above analysis, if the two reference target points are both interior points, that is, the Y-axis distances of the two reference target points are smaller than the ordinate value of the center point, the first relationship model specifically adopts equations (24) to (33), that is:
yc = h×tan xc1
xc1 = arctan[(c1×F1 ± √Δ1)/(2×(E1 − c1×G1))]
E1=tan a1-tan b1
F1=tan a1+tan b1
G1=tan a1×tan b1
Δ1 = c1²×F1² − 4×(E1 − c1×G1)×(E1 − c1)
a1 = arctan(dy1′/f)
b1 = arctan(dy1″/f)
c1 = Δy/h
wherein h is the installation height of the image acquisition device, f is the focal length value of the image acquisition device, dy1′ and dy1″ are respectively the pixel distances in the image between the two interior reference target points and the center point, Δy is the actual normal distance between the two reference target points, and yc is the actual coordinate position of the center point.
If the two reference target points are both exterior points, that is, the Y-axis distances of the two reference target points are greater than the ordinate value of the center point, the expression of the first relationship model adopts equations (36) to (45), that is:
yc=h×tan xc2
xc2 = arctan[(−c2×F2 ± √Δ2)/(2×(E2 − c2×G2))]
E2=tan a2-tan b2
F2=tan a2+tan b2
G2=tan a2×tan b2
Δ2 = c2²×F2² − 4×(E2 − c2×G2)×(E2 − c2)
a2 = arctan(dy2″/f)
b2 = arctan(dy2′/f)
c2 = Δy/h
Δy = y2″ − y2′
wherein dy2′ and dy2″ are respectively the pixel distances in the image between the two exterior reference target points and the center point, and yc is the actual coordinate position of the center point on the road surface.
In this embodiment, besides the above, the specific form of the first relationship model for calculating the actual coordinates of the center point from the two reference target points may also be adaptively adjusted from the above form according to actual requirements, for example by adding adjustment factors or weight coefficients, and the first relationship model may even be constructed in other forms.
In the above manner, the actual coordinates of the center point can be determined simply, conveniently and accurately by acquiring only the actual relative distance between the two corresponding points in the visual imaging, without specifically acquiring the actual positions of the two points, so that the whole calibration process can be completed quickly and efficiently and the target point positions on the actual road surface calibrated to the corresponding pixel positions in the picture.
In a specific application embodiment, in step S01 the two reference target points may be taken from two adjacent dashed lane markings on the traffic road surface. Since the length of a dashed lane marking and the gap between adjacent markings are generally standard, determined distances, the spacing between adjacent dashed lane markings can be determined directly, and the relative distance between the two reference target points obtained from it; the distance prior information of adjacent dashed lane markings thus conveniently yields the relative distance between the two reference target points. Of course, the two reference target points may also be selected using other fixed-length markers on the road surface of the traffic scene. If conditions on the two sides of the traffic road permit, the two reference target points may instead be taken at the side of the road, and their relative distance obtained by measuring the distance between them. After the actual position coordinates of the center point are obtained, the actual position coordinates of all points on the road surface of the traffic scene can be mapped and calibrated to pixel coordinates in the imaged road surface.
In a specific application embodiment, the selection manner of the two reference target points may be determined according to a specific traffic scene:
1. If a marker with a fixed-length identification exists in the current traffic scene, so that the relative distance between two points is evident (such as a dashed lane marking or two adjacent dashed lane markings), the two reference target points are selected on this marker and the distance between them determined; the extraction of the actual position coordinates of the center point D is completed according to the above steps, and the position calibration of the pixel corresponding to any point of the actual road surface in the formed picture is then completed by step S02;
2. If no marker with a fixed-length identification exists in the current traffic scene, but the roadside of the scene is suitable for an operator to move along, two reference target points can be selected at the side of the traffic road and the relative distance between them measured, either directly (for example with a rope of fixed length) or by a method such as radar ranging; after the extraction of the actual position coordinates of the center point D is completed according to the above steps, the position calibration of the pixel corresponding to any point of the actual road surface in the formed image is completed by step S02.
To reduce errors and obtain more accurate actual position coordinates of the center point D, multiple tests can be performed, each yielding one group of actual position coordinates of the center point D according to the above scheme, after which a statistical value (such as the average) of the obtained coordinates is taken. The number of tests can be chosen according to actual requirements; preferably the number of tests N is no more than 4.
In this embodiment, the coordinate position relationship model in step S02 is constructed from the relationship among the pixel coordinate position of the target point in the image, the actual coordinate position of the target point, and the actual coordinate position of the center point. As can be seen from the above equations (6), (9), (15) and (18), the actual coordinates of any target point, the actual coordinates of the center point, and the pixel coordinates in the image have a definite correlation; after the actual coordinate position of the center point is obtained, the imaging position of a target point in the calibration image can be calibrated from its actual coordinates by means of the actual coordinate position of the center point. Specifically, for an interior point the coordinate position relationship model is formed by equations (6) and (9), and for an exterior point by equations (15) and (18). Of course, the specific form of the coordinate position relationship model may be adaptively adjusted according to actual requirements, for example by adding adjustment factors or weight coefficients, and it may even be constructed in other forms; the key is to construct the relationship between the actual coordinates of the target point and of the center point and the pixel coordinates in the image, so that the pixel coordinates of the target point in the image can be calibrated using the actual coordinates of the target point, the actual coordinates of the center point, and the pixel coordinates of the center point in the image.
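Once yc is known, the vertical component of this calibration can be sketched by inverting the reconstructed equations (19)/(20); the lateral (x) component of equations (6), (9), (15) and (18) is not reproduced in this excerpt and is therefore omitted here. The function name and the sign convention (positive offset below the image center) are assumptions:

```python
import math

def vertical_pixel_offset(h, f, yc, y):
    """Vertical pixel offset of a road point at ground distance y from the
    image center, given the center-point distance yc (from eqs. (19)/(20)).
    Positive: interior point (images below center); negative: exterior."""
    xc = math.atan(yc / h)
    return f * math.tan(xc - math.atan(y / h))

# with h = 6 m, f = 1000 px, yc = 30 m: a point at 15 m lands ~185 px below
# the center, while a point at 60 m lands ~98 px above it
off_near = vertical_pixel_offset(6.0, 1000.0, 30.0, 15.0)
off_far = vertical_pixel_offset(6.0, 1000.0, 30.0, 60.0)
```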
This embodiment can be applied to various fixed traffic scenes. After the absolute coordinates of targets on the traffic road surface are extracted, the one-to-one calibration between the pixel coordinates of the relevant road surface portions in the visual image and the actual distances can be realized while ensuring calibration accuracy and operational simplicity. Meanwhile, by simplifying the fusion calibration steps, the actual position coordinates of the road surface center point need not be calibrated manually, the mapping of the actual position coordinates of all road surface points in the traffic scene to pixel coordinates in the imaged road surface can be completed quickly and efficiently, the danger of manual calibration work on the road surface is avoided, and the dependence on manual installation operations is reduced.
This embodiment also includes a device for calibrating the position of a visual detection target in a fixed traffic roadside scene, the device comprising a processor and a memory, the memory being used to store a computer program and the processor being used to execute the computer program so as to perform the above calibration method.
This embodiment further includes a computer-readable storage medium storing a computer program which, when executed, implements the above calibration method.
Example 2:
As shown in fig. 6, the method for calibrating the position of the visual detection target in the fixed traffic roadside scene according to this embodiment includes the following steps:
S01, determining the position of the center point: acquiring the actual coordinate position of a reference target point and its corresponding pixel coordinate position in the image, wherein the vision sensor to be calibrated is fixedly arranged at a specified roadside position and the reference target point is located on the traffic road surface within the visual detection range of the vision sensor to be calibrated, and determining the actual coordinate position of the center point from the acquired coordinate positions;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point, so as to complete the calibration of the target point to be calibrated in the image.
The basic principle of this embodiment is similar to that of Embodiment 1, except that in step S01 a single reference target point is used to determine the actual coordinate position of the center point. Because there is a determined relationship among the pixel coordinates of a target point in the image, its actual position coordinates, and the actual position coordinates of the center point, if the actual position coordinates and pixel coordinates of any one target point on the road surface of the traffic scene can be determined, the actual position coordinates of the center point can be obtained from the relationship of the three, after which the mapping and calibration of the actual position coordinates of all road surface points to pixel coordinates in the imaged road surface can be completed.
In step S01 of this embodiment, a second relationship model between the actual coordinate position of the target point, the pixel coordinate position of the target point in the image, and the actual coordinate position of the central point is specifically pre-constructed, and after the actual coordinate position and the pixel coordinate position of the reference target point are obtained, the actual coordinate position of the central point is determined using the second relationship model.
As can be seen from Embodiment 1, with f and h known, the relationship among dy1, y1 and yc (or among dy2, y2 and yc) is determined: knowing the values of any two variables, the third can be obtained. Whether the target point is an interior or an exterior point can be determined by comparing its imaged pixel position with the center-point pixel position obtained from the internal reference matrix. When the target point is an interior point, the actual normal distance of the center point is:
yc = h×tan(arctan(y1/h) + arctan(dy1/f))  (47)
where h is the installation height of the image acquisition device, f is the focal length value of the image acquisition device, dy1 is the pixel distance in the vertical direction in the image between the interior reference target point and the center point, y1 is the normal distance in the actual coordinate position of the interior reference target point, and yc is the actual coordinate position of the center point in the vertical direction.
When the target point is an outer point, the actual normal direction distance of the central point is as follows:
yc = h×tan(arctan(y2/h) − arctan(dy2/f))  (48)
where dy2 is the pixel distance in the vertical direction in the image between the exterior reference target point and the center point, and y2 is the normal distance in the actual coordinate position of the exterior reference target point.
According to the above equations, once the actual position distance of any target point and the pixel position at which it is imaged are obtained, the actual normal distance of the center point can be computed; the actual coordinate position of the center point can thus be obtained by calibrating any single point on the road surface. When the reference target point is an interior point, i.e. its Y-axis distance is smaller than the ordinate value of the center point, the expression of the second relationship model is as shown in equation (47); when the reference target point is an exterior point, i.e. its Y-axis distance is greater than the ordinate value of the center point, the expression of the second relationship model is as shown in equation (48).
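A minimal sketch of the second relationship model, assuming the reconstructed equations (47)/(48); the function name and the radar-style inputs are illustrative, not from the patent:

```python
import math

def center_point_from_single_point(h, f, y, dy, interior):
    """Second relationship model: recover yc from one reference point whose
    actual ground distance y (e.g. measured by radar) and pixel offset dy
    from the image center are both known (eqs. (47)/(48))."""
    if interior:   # eq. (47): xc = arctan(y/h) + arctan(dy/f)
        return h * math.tan(math.atan(y / h) + math.atan(dy / f))
    # eq. (48): xc = arctan(y/h) - arctan(dy/f)
    return h * math.tan(math.atan(y / h) - math.atan(dy / f))

# both branches recover the same center point for h = 6 m, f = 1000 px
yc_int = center_point_from_single_point(6.0, 1000.0, 15.0, 2500.0 / 13.5, True)
yc_ext = center_point_from_single_point(6.0, 1000.0, 60.0, 5000.0 / 51.0, False)
```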
In this embodiment, the entire calibration process may specifically be completed by combining radar ranging and visual imaging: a radar is used to obtain the actual position coordinates of a target point, a camera is used to obtain the corresponding pixel position of the target point, and after the actual position coordinates of the center point D are obtained in the above manner, the target point positions on the actual road surface are calibrated to the corresponding pixel positions in the picture in the same way as in step S02 of Embodiment 1.
In a specific application embodiment, a camera and a radar are used to record, at the same moment, the actual position coordinates of a moving vehicle on the road surface and the pixel coordinates in the corresponding picture. Because a vehicle generally occupies a certain pixel area, when the corresponding pixel point is selected, a point close to the road surface at the front of the vehicle is preferably taken as the corresponding pixel coordinate. Further, the final actual coordinates of the center point D may be obtained by performing several tests and taking a statistical average.
This calibration mode is suitable for occasions where the traffic scene contains no fixed-distance marker and the roadside is unsuitable for measurement operations, and can be chosen according to the requirements of the actual scene.
Example 3:
In this embodiment, calibration is implemented by combining Embodiment 1 and Embodiment 2: when the position of the center point is to be determined, the manner of determining it is selected according to the real-time traffic scene environment. If it is detected that a marker with a fixed-distance identification, such as a lane marking, exists in the current traffic scene, or that the roadside is free of interfering obstacles that would affect measurement, the method of Embodiment 1 is automatically selected to determine the center point position; otherwise the method of Embodiment 2 is automatically selected. Accurate calibration can thus adapt automatically to different traffic scene environments.
As shown in fig. 7, the detailed steps for implementing the visual target position information calibration in the fixed traffic roadside scene in this embodiment are as follows:
Step 1: after the camera is fixedly installed at the roadside, correct the camera to obtain the parameters of the internal reference matrix;
Step 2: acquire and identify an image of the current traffic scene environment in real time, and judge whether a marker with a fixed-distance identification exists in the current traffic scene or whether the roadside is free of interfering obstacles; if a fixed-distance marker exists, or the roadside can be measured without obstruction, go to step 3; otherwise go to step 4;
Step 3: take two reference target points, obtain the relative distance between them, determine the actual coordinate position of the center point D in the manner of step S01 in Embodiment 1, and go to step 5;
Step 4: take one reference target point, obtain its actual position coordinates and its pixel coordinates in the image by combining the radar with the vision sensor, determine the actual coordinate position of the center point D in the manner of step S01 in Embodiment 2, and go to step 5;
Step 5: acquire the actual coordinate position of the target point to be calibrated, calculate its pixel coordinate position in the image from its actual coordinate position and the determined actual coordinate position of the center point in the manner of step S02 in Embodiment 1, and complete the calibration of the target point to be calibrated in the image.
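The scene-dependent selection in steps 2 to 4 can be sketched as a simple dispatch; the flag names are illustrative, not from the patent:

```python
def select_center_point_method(has_fixed_length_marker, roadside_measurable):
    """Embodiment 3 dispatch: prefer the two-reference-point model of
    Embodiment 1 when a fixed-length marker exists or the roadside can be
    measured without obstruction, otherwise fall back to the radar-assisted
    single-point model of Embodiment 2."""
    if has_fixed_length_marker or roadside_measurable:
        return "embodiment-1-two-point"
    return "embodiment-2-radar-single-point"
```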
Through the mode, the position of the central point can be determined in a suitable mode according to different traffic scene environments, so that different traffic scenes can be automatically matched, and efficient and accurate calibration can be realized under various traffic scene environments.
The foregoing is considered as illustrative of the preferred embodiments of the invention and is not to be construed as limiting the invention in any way. Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Therefore, any simple modification, equivalent change and modification made to the above embodiments according to the technical spirit of the present invention should fall within the protection scope of the technical scheme of the present invention, unless the technical spirit of the present invention departs from the content of the technical scheme of the present invention.

Claims (8)

1. A visual detection target position calibration method under a fixed traffic roadside scene is characterized by comprising the following steps:
s01, determining the position of a center point: acquiring a relative distance between two reference target points and a corresponding pixel coordinate position in an image to be calibrated acquired by a vision sensor to be calibrated, wherein the vision sensor to be calibrated is fixedly arranged at a specified position of a roadside, the reference target points are positioned on a traffic road surface in a vision detection range, and according to the acquired relative distance and the pixel coordinate position, an actual coordinate position of a central point in the traffic road surface in the vision detection range is determined, the imaging correspondence of the central point in the detection range on the traffic road surface is a central position of an image coordinate, the actual coordinate position is an actual coordinate position on the traffic road surface, and the pixel coordinate position is a coordinate position in the image;
s02, position information calibration: acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image to be calibrated according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point to finish calibration;
in step S01, a first relationship model between the relative distance between two reference target points, the pixel distance between the two reference target points and the central point in the image, and the actual coordinate position of the central point in the vertical direction is pre-constructed, the first relationship model is calculated according to the installation height of the image acquisition device, the focal length value of the image acquisition device, the relative distance between the two reference target points, and the pixel distance between the two reference target points and the central point in the image to obtain the actual coordinate position of the central point in the vertical direction, and after the relative distance between the two reference target points and the pixel coordinate position are obtained, the actual coordinate position of the central point is determined using the first relationship model.
2. The method for calibrating the position of the visual detection target under the scene of the fixed traffic road side according to claim 1, characterized in that: if the two reference target points are interior points, that is, the Y-axis distance between the two reference target points is smaller than the ordinate value of the center point, the expression of the first relationship model is as follows:
yc = h×tan xc1
xc1 = arctan[(c1×F1 ± √Δ1)/(2×(E1 − c1×G1))]
E1=tan a1-tan b1
F1=tan a1+tan b1
G1=tan a1×tan b1
Δ1 = c1²×F1² − 4×(E1 − c1×G1)×(E1 − c1)
a1 = arctan(dy1′/f)
b1 = arctan(dy1″/f)
c1 = Δy/h
h is the installation height of the image acquisition equipment, f is the focal length value of the image acquisition equipment, dy1′ and dy1″ are respectively the pixel distances in the image between the two interior reference target points and the center point, yc is the actual coordinate position of the center point in the vertical direction, and Δy is the actual normal distance between the two reference target points;
if the two reference target points are both outer points, that is, the Y-axis distance of the two reference target points is greater than the ordinate value of the center point, the expression of the first relational model is as follows:
yc=h×tan xc2
xc2 = arctan[(−c2×F2 ± √Δ2)/(2×(E2 − c2×G2))]
E2=tan a2-tan b2
F2=tan a2+tan b2
G2=tan a2×tan b2
Δ2 = c2²×F2² − 4×(E2 − c2×G2)×(E2 − c2)
a2 = arctan(dy2″/f)
b2 = arctan(dy2′/f)
c2 = Δy/h
Δy = y2″ − y2′
wherein dy2′ and dy2″ are respectively the pixel distances in the image between the two exterior reference target points and the center point, and yc is the actual coordinate position of the center point on the road surface in the vertical direction.
3. The method for calibrating the position of a visually detected target under a scene at the side of a fixed traffic road according to claim 1, wherein in step S01, two reference target points are respectively taken from the virtual lines of two adjacent lanes on the traffic road, and the relative distance between the two reference target points is determined according to the distance between the virtual lines of two adjacent lanes; or in step S01, two reference points are taken from both sides of the traffic road, and the distance between the two reference points is measured to obtain the relative distance between the two reference points.
4. The method for calibrating the position of a visual detection target under the fixed traffic roadside scene according to any one of claims 1 to 3, wherein the coordinate position relationship model is constructed according to the pixel coordinate position of the target point in the image, the actual coordinate position of the target point and the actual coordinate position of the central point.
5. A visual detection target position calibration method under a fixed traffic roadside scene is characterized by comprising the following steps:
s01, determining the position of a center point: acquiring an actual coordinate position of a reference target point and a corresponding pixel coordinate position of the reference target point in an image, wherein a to-be-calibrated vision sensor is fixedly arranged at a specified position of a roadside, the reference target point is positioned on a traffic road surface within a vision detection range of the to-be-calibrated vision sensor, the actual coordinate position of a central point is determined according to the acquired coordinate position, the imaging correspondence of the central point is the central position of an image coordinate, the actual coordinate position is the actual coordinate position on the traffic road surface, and the pixel coordinate position is the coordinate position in the image;
S02, calibrating position information: acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image from its actual coordinate position, the actual coordinate position of the center point determined in step S01, and a pre-constructed coordinate position relation model between the target point and the center point, thereby completing the calibration of the target point to be calibrated in the image;
wherein in step S01, a second relation model among the actual coordinate position of the reference target point, the pixel coordinate position of the reference target point in the image, and the actual coordinate position of the center point in the vertical direction is pre-constructed; the second relation model computes the actual coordinate position of the center point in the vertical direction from the installation height of the image acquisition device, the focal length of the image acquisition device, the actual coordinate position of the reference target point, and the pixel coordinate position of the reference target point in the image; and after the actual coordinate position and the pixel coordinate position of the reference target point are obtained, the actual coordinate position of the center point in the vertical direction is determined using the second relation model.
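The second relation model described above can be motivated by elementary pinhole-camera geometry. The following sketch is an illustrative reconstruction, not the patent's own derivation: the depression angles α and β are introduced here for exposition, while h (installation height), f (focal length) and y_c (vertical ground coordinate of the center point) carry the meanings used in the claims.

```latex
% Illustrative pinhole-geometry sketch (assumed, not quoted from the patent):
% a camera at height h whose optical axis meets the road surface at
% longitudinal distance y_c observes a ground point at distance y.
\begin{align*}
\alpha &= \arctan\frac{h}{y_c} && \text{depression angle of the optical axis} \\
\beta  &= \arctan\frac{h}{y}   && \text{depression angle to the ground point} \\
dy     &= f\,\tan(\beta - \alpha)
        = \frac{f\,h\,(y_c - y)}{y\,y_c + h^2}
        && \text{vertical pixel offset for an interior point } (y < y_c)
\end{align*}
% Given one measured pair (y_1, dy_1), solving for y_c yields
% y_c = h\,(f\,y_1 + dy_1\,h)\,/\,(f\,h - dy_1\,y_1),
% which is one way step S01 can fix the center point's vertical coordinate.
```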
6. The method for calibrating the position of a visual detection target in a fixed traffic roadside scene according to claim 5, wherein in step S01, when the reference target point is an interior point, that is, when its Y-axis distance is smaller than the ordinate of the center point, the second relation model is expressed as:
[Equation image FDA0003356525660000041: second relation model for an interior reference point, relating dy1, y1, yc, h and f]
wherein h is the installation height of the image acquisition device, f is the focal length of the image acquisition device, dy1 is the vertical pixel distance in the image between the interior reference target point and the center point, y1 is the longitudinal distance in the actual coordinate position of the interior reference target point, and yc is the actual coordinate position of the center point in the vertical direction;
when the reference target point is an exterior point, that is, when its Y-axis distance is greater than the ordinate of the center point, the second relation model is expressed as:
[Equation image FDA0003356525660000042: second relation model for an exterior reference point, relating dy2, y2, yc, h and f]
where dy2 is the vertical pixel distance in the image between the exterior reference target point and the center point, y2 is the longitudinal distance in the actual coordinate position of the exterior reference target point, and yc is the actual coordinate position of the center point in the vertical direction.
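To make the interior-point and exterior-point relations concrete, here is a minimal numerical sketch. The closed-form expressions, and the solve for yc used in step S01, are an assumption reconstructed from standard pinhole geometry and the variable definitions above (the patent's equation images are not reproduced in this text), and all numeric values are hypothetical.

```python
def dy_inner(h, f, y1, yc):
    """Vertical pixel offset of an interior reference point (y1 < yc).

    Equivalent to f * tan(atan(h/y1) - atan(h/yc)) under a pinhole model:
    h  - installation height of the image acquisition device
    f  - focal length expressed in pixels
    y1 - longitudinal ground distance of the interior reference point
    yc - longitudinal ground distance of the image center point
    """
    return f * h * (yc - y1) / (y1 * yc + h ** 2)

def dy_outer(h, f, y2, yc):
    """Vertical pixel offset of an exterior reference point (y2 > yc)."""
    return f * h * (y2 - yc) / (y2 * yc + h ** 2)

def yc_from_inner(h, f, y1, dy1):
    """Step S01: recover the center point's vertical ground coordinate yc
    from one measured interior reference point (y1, dy1), by inverting
    dy1 = f*h*(yc - y1) / (y1*yc + h**2)."""
    return h * (f * y1 + dy1 * h) / (f * h - dy1 * y1)

# Hypothetical numbers: camera 6 m high, f = 1000 px, optical axis
# meeting the road 30 m ahead; reference point painted 20 m ahead.
h, f, yc = 6.0, 1000.0, 30.0
dy1 = dy_inner(h, f, 20.0, yc)                    # what the sensor would measure
print(round(yc_from_inner(h, f, 20.0, dy1), 6))   # recovers 30.0
```

Once yc is fixed, step S02 can run the same forward expressions in reverse: any actual road coordinate of a target to be calibrated maps to a vertical pixel offset, placing the target in the image.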
7. A device for calibrating the position of a visual detection target in a fixed traffic roadside scene, comprising a processor and a memory, wherein the memory is configured to store a computer program and the processor is configured to execute the computer program so as to perform the method according to any one of claims 1 to 6.
8. A computer-readable storage medium storing a computer program, wherein the computer program, when executed, implements the method according to any one of claims 1 to 6.
CN202111040735.3A 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene Active CN113496528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040735.3A CN113496528B (en) 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene


Publications (2)

Publication Number Publication Date
CN113496528A CN113496528A (en) 2021-10-12
CN113496528B true CN113496528B (en) 2021-12-14

Family

ID=77997178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040735.3A Active CN113496528B (en) 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene

Country Status (1)

Country Link
CN (1) CN113496528B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115401689B (en) * 2022-08-01 2024-03-29 北京市商汤科技开发有限公司 Distance measuring method and device based on monocular camera and computer storage medium
CN115166722B (en) * 2022-09-05 2022-12-13 湖南众天云科技有限公司 Non-blind-area single-rod multi-sensor detection device for road side unit and control method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108489395B (en) * 2018-04-27 2019-03-22 中国农业大学 Vision measurement system structural parameters calibration and affine coordinate system construction method and system
CN109035320B (en) * 2018-08-12 2021-08-10 浙江农林大学 Monocular vision-based depth extraction method
JP7159900B2 (en) * 2019-02-15 2022-10-25 日本電信電話株式会社 Position Coordinate Derivation Device, Position Coordinate Derivation Method, Position Coordinate Derivation Program and System
TWI720447B (en) * 2019-03-28 2021-03-01 財團法人工業技術研究院 Image positioning method and system thereof
CN111982072B (en) * 2020-07-29 2022-07-05 西北工业大学 Target ranging method based on monocular vision
CN113012237A (en) * 2021-03-31 2021-06-22 武汉大学 Millimeter wave radar and video monitoring camera combined calibration method

Also Published As

Publication number Publication date
CN113496528A (en) 2021-10-12

Similar Documents

Publication Publication Date Title
CN107703528B (en) Visual positioning method and system combined with low-precision GPS in automatic driving
CN113496528B (en) Method and device for calibrating position of visual detection target in fixed traffic roadside scene
US10909395B2 (en) Object detection apparatus
KR102054455B1 (en) Apparatus and method for calibrating between heterogeneous sensors
CN106289159B (en) Vehicle distance measurement method and device based on distance measurement compensation
CN112348902B (en) Method, device and system for calibrating installation deviation angle of road-end camera
EP3678096A1 (en) Method for calculating a tow hitch position
WO2019238127A1 (en) Method, apparatus and system for measuring distance
US20120281881A1 (en) Method for Estimating the Roll Angle in a Travelling Vehicle
US11971961B2 (en) Device and method for data fusion between heterogeneous sensors
CN111310708B (en) Traffic signal lamp state identification method, device, equipment and storage medium
LU502288B1 (en) Method and system for detecting position relation between vehicle and lane line, and storage medium
CN105809669A (en) Method and apparatus of calibrating an image detecting device
EP3157255A1 (en) Calibration apparatus and calibration method
CN112633035B (en) Driverless vehicle-based lane line coordinate true value acquisition method and device
CN116447979A (en) Binocular vision slope displacement monitoring method and device based on unmanned aerial vehicle
CN116386000A (en) Method and system for measuring obstacle distance based on high-precision map and monocular camera
CN114820793A (en) Target detection and target point positioning method and system based on unmanned aerial vehicle
CN110660229A (en) Vehicle speed measuring method and device and vehicle
CN111382591A (en) Binocular camera ranging correction method and vehicle-mounted equipment
CN112255604A (en) Method and device for judging accuracy of radar data and computer equipment
CN110188665B (en) Image processing method and device and computer equipment
CN108108706B (en) Method and system for optimizing sliding window in target detection
CN111738035A (en) Method, device and equipment for calculating yaw angle of vehicle
CN116430879A (en) Unmanned aerial vehicle accurate guiding landing method and system based on cooperative targets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant