CN113496528A - Method and device for calibrating position of visual detection target in fixed traffic roadside scene - Google Patents


Info

Publication number: CN113496528A (application CN202111040735.3A)
Authority: CN (China)
Prior art keywords: point, coordinate position, calibrated, target, image
Legal status: Granted
Application number: CN202111040735.3A
Other languages: Chinese (zh)
Other versions: CN113496528B
Inventors: 彭贵福, 黄利雄, 张国壁, 舒键, 张永斌
Current Assignee: Hunan Zhongtianyun Technology Co Ltd (also the Original Assignee)
Application filed by Hunan Zhongtianyun Technology Co Ltd
Priority to CN202111040735.3A
Publication of CN113496528A
Application granted
Publication of CN113496528B
Legal status: Active

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis > G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/00 Image analysis > G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10004 Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a device for calibrating the position of a visual detection target in a fixed traffic roadside scene. The method comprises the following steps: S01, acquiring the relative distance between two reference target points and their corresponding pixel coordinate positions in an image to be calibrated that is acquired by a visual sensor to be calibrated, the visual sensor being fixedly installed at a specified roadside position, and determining the actual coordinate position of the central point of the traffic road surface within the visual detection range from the acquired relative distance and pixel coordinate positions; S02, acquiring the actual coordinate position of a target point to be calibrated, and calculating its pixel coordinate position in the image to be calibrated from the actual coordinate position of the target point, the actual coordinate position of the central point determined in step S01, and a pre-constructed coordinate position relation model between target points and the central point, thereby completing the calibration. The invention has the advantages of simple operation, low cost, and high efficiency and precision.

Description

Method and device for calibrating position of visual detection target in fixed traffic roadside scene
Technical Field
The invention relates to the technical field of intelligent traffic, and in particular to a method and a device for calibrating the position of a visual detection target in a fixed traffic roadside scene.
Background
With the rapid growth in the number of road vehicles, road transportation conditions have become increasingly complex: traffic congestion is endless and accidents occur frequently, so the pressure borne by traditional traffic management systems grows ever heavier, and intelligent traffic systems can better address these conditions. Intelligent traffic is based on Intelligent Transportation Systems (ITS), which apply technologies such as the internet of things, cloud computing, the internet, artificial intelligence, automatic control and the mobile internet to the traffic field, giving a traffic system the capabilities of perception, interconnection, analysis, prediction and control across a region, a city, or an even larger span of space and time, thereby improving its operating efficiency and management level.
To realize intelligent control in intelligent traffic, roadside units are generally arranged on both sides of the traffic road surface, as shown in fig. 1. A roadside unit comprises various sensors, such as radars and vision sensors, and acquires vehicle target information on the road in real time. The basis of intelligent traffic is the collection and fusion of multi-sensor data: the information collected by multiple sensors is gathered and calibrated into a unified coordinate system. For example, target information obtained by millimeter-wave radar detection is calibrated and fused with targets in camera imaging, so that real-time multi-dimensional monitoring and behavior analysis can be performed on the various elements (people, vehicles, animals and so on) in a traffic scene. To realize effective fusion of multi-sensor information, the most important step is to calibrate and register the corresponding targets detected by each sensor so as to unify them in the same coordinate system; only an accurate and efficient calibration method can ensure accurate fusion of the target information detected by each sensor.
The sensors in the roadside units of a traffic scene are fixed: once installed, the detection environment and the traffic scene no longer change dynamically, so after calibration is completed once, the actual position coordinates of each point on the road surface and the pixel coordinates in the image correspond one-to-one and remain fixed. The prior art mostly addresses information fusion for mobile sensors, such as vehicle-mounted multi-sensor setups, and those calibration methods are not suitable for fixed sensors. Traditional camera calibration is usually realized by means of auxiliary calibration equipment; this approach is complex, its calibration efficiency is low, and the special auxiliary equipment makes implementation costly, so when it is applied to sensor calibration in a traffic roadside scene, it is difficult to realize calibration between the traffic road surface and the calibration image quickly and accurately.
The Chinese patent application CN202010401379.2 discloses a camera calibration method in which a calibration board of a specific form is arranged, a plurality of calibration images containing the board are captured by the camera, the coding region and non-coding region of each calibration image are detected, the coding information of these regions is extracted and matched against a preset calibration board, and the calibration data of the camera is then calculated. This scheme must complete calibration by means of a calibration board of a specific form and depends on identification and matching of image coding regions; it is complex to realize, costly, and not suitable for calibration between a traffic road surface and a calibration image in a traffic roadside scene.
In summary, the sensor calibration approaches in the prior art do not consider the characteristics of the fixed traffic roadside scene, and it is difficult for them to realize calibration between the traffic road surface and the calibration image quickly and accurately. A visual target position calibration method applicable to fixed traffic roadside scenes, capable of quick and accurate calibration between the traffic road surface and the calibration image, is therefore urgently needed.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the technical problems in the prior art, the invention provides a method and a device for calibrating the position of a visual detection target in a fixed traffic roadside scene that are simple to implement, low in cost, and high in efficiency and precision, and that fully exploit the characteristics of the fixed traffic roadside scene to realize precise calibration between the traffic road surface and the calibration image of the visual sensor.
In order to solve the technical problems, the technical scheme provided by the invention is as follows:
a visual detection target position calibration method under a fixed traffic roadside scene comprises the following steps:
s01, determining the position of a center point: acquiring the relative distance between two reference target points and corresponding pixel coordinate positions in an image to be calibrated acquired by a visual sensor to be calibrated, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a roadside, the reference target points are positioned on a traffic road surface in a visual detection range, and the actual coordinate position of a central point in the traffic road surface in the visual detection range is determined according to the acquired relative distance and the pixel coordinate positions;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image to be calibrated according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point to finish calibration.
Further, in step S01, a first relationship model between the relative distance between the two reference target points, the pixel distance between the two reference target points and the central point in the image, and the actual coordinate position of the central point is pre-constructed, and after the relative distance between the two reference target points and the pixel coordinate position are obtained, the actual coordinate position of the central point is determined by using the first relationship model.
Further, if the two reference target points are both inner points, that is, the Y-axis distances of the two reference target points are smaller than the ordinate value of the center point, the expression of the first relationship model is:

\[ y_{11} = \frac{h(f y_c - h\,dy_{11})}{f h + dy_{11} y_c},\qquad y_{12} = \frac{h(f y_c - h\,dy_{12})}{f h + dy_{12} y_c},\qquad \Delta y = y_{11} - y_{12} \]

where h is the installation height of the image acquisition device, f is the focal length value of the image acquisition device, dy_{11} and dy_{12} are the vertical pixel distances in the image between the two inner reference target points and the central point, Δy is the relative distance between the two reference target points, and y_c is the actual normal coordinate of the central point;
if the two reference target points are both outer points, that is, the Y-axis distances of the two reference target points are greater than the ordinate value of the center point, the expression of the first relationship model is:

\[ y_{21} = \frac{h(f y_c + h\,dy_{21})}{f h - dy_{21} y_c},\qquad y_{22} = \frac{h(f y_c + h\,dy_{22})}{f h - dy_{22} y_c},\qquad \Delta y = y_{21} - y_{22} \]

where dy_{21} and dy_{22} are the vertical pixel distances in the image between the two outer reference target points and the central point, and y_c is the actual normal coordinate of the central point on the road surface.
Further, in step S01, the two reference target points are respectively taken on the dashed lane lines of two adjacent lanes on the traffic road surface, and the relative distance between the two reference target points is determined from the known spacing of the dashed lane lines of the two adjacent lanes; or, in step S01, the two reference target points are taken on the two sides of the traffic road surface, and the distance between them is measured to obtain their relative distance.
Further, the coordinate position relation model is constructed according to the pixel coordinate position of the target point in the image, the actual coordinate position of the target point and the actual coordinate position of the central point.
A visual detection target position calibration method under a fixed traffic roadside scene comprises the following steps:
s01, determining the position of a center point: acquiring an actual coordinate position of a reference target point and a corresponding pixel coordinate position of the reference target point in an image, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a road side, the reference target point is positioned on a traffic road surface in a visual detection range of the visual sensor to be calibrated, and the actual coordinate position of a central point is determined according to the acquired coordinate position;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point, so as to complete the calibration of the target point to be calibrated in the image.
Further, in step S01, a second relationship model between the actual coordinate position of the target point, the pixel coordinate position of the target point in the image, and the actual coordinate position of the central point is pre-constructed, and after the actual coordinate position and the pixel coordinate position of the reference target point are obtained, the actual coordinate position of the central point is determined using the second relationship model.
Further, in step S01, when the reference target point is an inner point, that is, its Y-axis distance is smaller than the ordinate value of the center point, the expression of the second relationship model is:

\[ y_1 = \frac{h(f y_c - h\,dy_1)}{f h + dy_1 y_c} \]

where h is the installation height of the image acquisition device, f is the focal length value of the image acquisition device, dy_1 is the vertical pixel distance in the image between the reference target point P1 and the central point, y_1 is the normal distance in the actual coordinate position of the inner reference target point, and y_c is the actual normal coordinate position of the central point;
when the reference target point is an outer point, that is, its Y-axis distance is greater than the ordinate value of the center point, the expression of the second relationship model is:

\[ y_2 = \frac{h(f y_c + h\,dy_2)}{f h - dy_2 y_c} \]

where dy_2 is the vertical pixel distance in the image between the outer reference target point and the central point, and y_2 is the normal distance in the actual coordinate position of the outer reference target point.
A visual detection target position calibration device in a fixed traffic roadside scene comprises a processor and a memory, wherein the memory is used for storing a computer program and the processor is used for executing the computer program to perform the method described above.
A computer-readable storage medium having stored thereon a computer program which, when executed, implements the method as described above.
Compared with the prior art, the invention has the advantages that:
1. The invention determines the actual coordinate position of the central point from reference points and then uses this coordinate position to calculate, through the corresponding relation, the pixel coordinate position of the target point to be calibrated in the image. The method is simple and convenient to operate, low in cost, and requires no additional auxiliary calibration tool; because it fully exploits the characteristics of the fixed traffic roadside scene, accurate calibration between the traffic road surface and the image can be realized, and once the absolute coordinates of the various targets on the traffic road surface are obtained, one-to-one calibration between the pixel coordinates in the visual image and the actual distances of the relevant road-surface parts can be achieved while ensuring calibration accuracy and operational simplicity.
2. The method can be applied to various different fixed traffic scenes, and can realize one-to-one corresponding calibration of the pixel coordinates and the actual distances of the relevant road surface parts in various different traffic scenes on the premise of ensuring the calibration accuracy and the operation simplicity after extracting the absolute coordinates of various targets on the traffic road surface.
3. According to the method, the steps of fusion calibration are simplified, manual calibration of the actual position coordinates of the road surface central point is not needed, danger caused by operation in the road surface due to manual calibration is avoided, dependence on manual installation operation is reduced, and the process of mapping and calibrating the actual position coordinates of all points of the road surface of a traffic scene to the pixel coordinates in the imaged road surface can be efficiently and accurately completed.
Drawings
FIG. 1 is a schematic diagram of vehicle target detection in a traffic roadside scene.
Fig. 2 is a schematic flow chart illustrating an implementation of the visual inspection target position calibration method in a fixed traffic roadside scene in accordance with embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of a principle of a calibration model in a traffic road side scene constructed in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of the principle of projection of the target point in the first case (inner point) in embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of the principle of projection of the target point in the second case (outer point) in embodiment 1 of the present invention.
Fig. 6 is a schematic flow chart illustrating an implementation of calibrating a position of a visual detection target in a scene of fixed traffic roadside in embodiment 2 of the present invention.
Fig. 7 is a schematic flow chart illustrating an implementation of calibrating a position of a visual detection target in a scene of fixed traffic roadside in embodiment 3 of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and specific preferred embodiments of the description, without thereby limiting the scope of protection of the invention.
Example 1:
as shown in fig. 2, the method for calibrating the position of the visual detection target in the fixed traffic roadside scene in the embodiment includes the following steps:
s01, determining the position of a center point: acquiring the relative distance between two reference target points and corresponding pixel coordinate positions in an image to be calibrated acquired by a visual sensor to be calibrated, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a roadside, the reference target points are positioned on a traffic road surface in the visual detection range of the visual sensor to be calibrated, and the actual coordinate position of a central point in the traffic road surface in the visual detection range is determined according to the acquired relative distance and the pixel coordinate positions;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image to be calibrated according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point, so as to complete the calibration of the target point to be calibrated in the image.
Consider that in a fixed traffic roadside scene, after the vision sensor (such as a camera) is fixedly installed as shown in fig. 1, its imaging can be simplified as pinhole imaging with the sensor fixed at height h. Once the position of the center point is determined, the position of a target point on the road surface to be calibrated can be correspondingly determined in the image by means of the coordinate position of the center point.
This embodiment takes into account the characteristics of calibration in a fixed traffic roadside scene: the actual coordinate position of the central point is determined from reference points, and this coordinate position is then used to calculate, through the corresponding relation, the pixel coordinate position of the target point to be calibrated in the image. The method is simple and convenient to operate, low in cost, and free of additional auxiliary calibration tools; because the characteristics of the fixed traffic roadside scene are fully exploited, accurate calibration between the traffic road surface and the image can be realized, and after the absolute coordinates of the various targets on the traffic road surface are obtained, one-to-one calibration between the pixel coordinates in the visual image and the actual distances of the relevant road-surface parts can be achieved while ensuring calibration accuracy and operational simplicity.
In this embodiment, after the vision sensor to be calibrated, specifically a camera, is fixedly installed, its intrinsic parameters are calibrated and corrected. A camera inevitably exhibits some error once put into use; it can be corrected with the classical Zhang calibration method, which yields the intrinsic matrix:

\[ K = \begin{bmatrix} f_x & 0 & x_c \\ 0 & f_y & y_c \\ 0 & 0 & 1 \end{bmatrix} \]

In practice f_x and f_y are very close, so their mean can be taken as the focal length value:

\[ f = \frac{f_x + f_y}{2} \]

where x_c and y_c are respectively the horizontal and vertical coordinates of the center of the resulting image.
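As an illustration only, the focal value and image center can be read off such an intrinsic matrix; all numeric values below are made-up example values, not values from the patent:

```python
import numpy as np

# Illustrative intrinsic matrix in the form produced by Zhang-style calibration
# (all numeric values here are assumptions for demonstration)
K = np.array([
    [1002.0,    0.0, 960.0],   # f_x, skew, x_c
    [   0.0,  998.0, 540.0],   #      f_y, y_c
    [   0.0,    0.0,   1.0],
])

f_x, f_y = K[0, 0], K[1, 1]
x_c, y_c = K[0, 2], K[1, 2]    # horizontal and vertical image-center coordinates
f = (f_x + f_y) / 2.0          # f_x and f_y are very close, so take their mean as the focal value
```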
After the intrinsic parameters of the sensor are calibrated, the coordinate relationship between a target point and the central point must be determined. Taking a specific application as an example, this coordinate relationship in the fixed traffic roadside scene is analyzed in detail below.
As shown in FIG. 3, in the fixed traffic roadside scene the camera imaging is simplified as pinhole imaging: the camera is fixed at height h and shoots downward at an angle, and each point on the traffic road surface corresponds one-to-one to a pixel of the picture taken by the camera. The Y-axis direction represents the camera normal-distance direction, the X-axis the horizontal-distance direction, and the Z-axis the height direction. Point D is defined as the center point, which images at the center position of the image coordinates; a target point P1 whose Y-axis distance is smaller than that of the center point is called an inner point, and a target point P2 whose Y-axis distance is greater than that of the center point is called an outer point.
According to the position relationship between the target point and the central point, the following two situations can be analyzed:
(1) In the first case: when the detected target point is an inner point P1, projecting P1 onto the Y axis and the X axis gives the projection diagram shown in fig. 4. The actual coordinates of the inner point P1 are (x1, y1), i.e. the normal distance is y1 and the horizontal distance is x1; the angle between the P1 point and the Z axis is α, the angle between P1 and the center point D is β, and the angle between the D point and the Y axis is γ. The following relationships hold:

\[ \tan\alpha = \frac{y_1}{h} \quad (1) \]

\[ \tan\gamma = \frac{h}{y_c} \quad (2) \]

\[ \alpha + \beta + \gamma = 90^\circ \quad (3) \]

\[ \tan\beta = \frac{dy_1}{f} \quad (4) \]
From the above formulas one can obtain:

\[ \frac{y_1}{h} = \tan\alpha = \frac{1 - \tan\gamma\tan\beta}{\tan\gamma + \tan\beta} \quad (5) \]

and therefore:

\[ y_1 = \frac{h(f y_c - h\,dy_1)}{f h + dy_1 y_c} \quad (6) \]

where dy_1 represents the vertical pixel distance in the image between the target point P1 and the center point, and f represents the focal length value of the camera.
Further, by the similarity theorem for triangles:

\[ \frac{x_1}{dx_1} = \frac{\sqrt{y_1^2 + h^2}}{y_{s1}} \quad (7) \]

where dx_1 represents the horizontal pixel distance in the image between the target point P1 and the center point, and y_{s1} represents the distance in the imaging plane from the image point of P1 to the camera origin, which can be expressed as:

\[ y_{s1} = \sqrt{f^2 + dy_1^2} \quad (8) \]

It then follows that:

\[ dx_1 = \frac{x_1\sqrt{f^2 + dy_1^2}}{\sqrt{y_1^2 + h^2}} \quad (9) \]
from (6) and (9), the pixel coordinates of the target point P1 in the image and the actual coordinates of the target point P1 are only (2)x1, y1) Actual coordinates of the center point (xc, yc) And mounting heighthFocal length valuefIn relation to the mounting heighthFocal length valuefIf the coordinate of the central point is a known quantity, the pixel coordinate corresponding to the internal point in the image can be calculated according to the equations (6) and (9) based on the actual coordinate position of the internal point, and the calibration of the internal point in the image is completed.
(2) In the second case: when the detected target point is an outer point P2, projecting P2 onto the Y axis and the X axis gives the projection diagram shown in FIG. 5. The actual coordinates of the outer point P2 are (x2, y2), i.e. the normal distance is y2 and the horizontal distance is x2; the angle between the P2 point and the Z axis is a, the angle between P2 and the center point D is b, and the angle between D and the Y axis is c. The following relationships hold:

\[ \tan a = \frac{y_2}{h} \quad (10) \]

\[ \tan c = \frac{h}{y_c} \quad (11) \]

\[ a = (90^\circ - c) + b \quad (12) \]

\[ \tan b = \frac{dy_2}{f} \quad (13) \]
From the above formulas one can obtain:

\[ \frac{y_2}{h} = \tan a = \frac{1 + \tan c\tan b}{\tan c - \tan b} \quad (14) \]

and therefore:

\[ y_2 = \frac{h(f y_c + h\,dy_2)}{f h - dy_2 y_c} \quad (15) \]

where dy_2 denotes the vertical pixel distance in the image between the target point P2 and the center point, and f represents the focal length value of the camera.
Meanwhile, by the similarity theorem for triangles:

\[ \frac{x_2}{dx_2} = \frac{\sqrt{y_2^2 + h^2}}{y_{s2}} \quad (16) \]

where dx_2 denotes the horizontal pixel distance in the image between the target point P2 and the center point, and y_{s2} represents the distance in the imaging plane from the image point of P2 to the camera origin:

\[ y_{s2} = \sqrt{f^2 + dy_2^2} \quad (17) \]

It then follows that:

\[ dx_2 = \frac{x_2\sqrt{f^2 + dy_2^2}}{\sqrt{y_2^2 + h^2}} \quad (18) \]
As can be seen from expressions (15) and (18) (similar in principle to the inner-point case), the pixel coordinates of the target point P2 in the image are related only to the actual coordinates (x2, y2) of P2, the actual coordinates (xc, yc) of the center point, the installation height h and the focal length value f. Since h and f are known quantities, if the coordinates of the central point are also known, the pixel coordinates corresponding to any outer point in the image can be calculated from its actual coordinate position according to equations (15) and (18), completing the calibration of any outer point in the image.
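The same sketch for the outer-point case, under the same assumptions (names and numbers are ours; the closed forms follow our reading of equations (15) and (18)):

```python
import math

def calibrate_outer_point(x2, y2, h, f, yc):
    """Pixel offsets (dx2, dy2) of an outer point (y2 > yc) from the image center."""
    # Eq. (15) rearranged: dy2 = f*h*(y2 - yc) / (h*h + y2*yc)
    dy2 = f * h * (y2 - yc) / (h * h + y2 * yc)
    # Eq. (18): horizontal pixel offset from triangle similarity
    dx2 = x2 * math.sqrt(f * f + dy2 * dy2) / math.sqrt(y2 * y2 + h * h)
    return dx2, dy2

h, f, yc = 6.0, 1000.0, 50.0
dx2, dy2 = calibrate_outer_point(-2.0, 80.0, h, f, yc)   # point 2 m left, 80 m ahead
# Round trip: substituting dy2 back into eq. (15) must reproduce y2 = 80 m
y2_back = h * (f * yc + h * dy2) / (f * h - dy2 * yc)
```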
From the above analysis, for inner and outer points alike there is a definite relationship between the actual coordinates of any target point relative to the central point and its pixel coordinates in the image: if the actual coordinate position of the central point can be obtained, the imaging position of a target point in the calibration picture can be calibrated from its actual coordinates by means of the central point. Determining the actual coordinate position of the central point is therefore the key to calibration. In an actual traffic scene, however, the traffic environment is often complex; whether on an expressway or an urban road, the central point D generally lies in the middle of the road surface, so directly calibrating it manually in the middle of the road is time-consuming, labor-intensive and extremely dangerous, and manual calibration easily introduces errors, so calibration precision cannot be guaranteed. In view of these problems, this embodiment determines the position coordinates of the central point D by means of two reference target points, which reduces implementation complexity, ensures safety and reliability, and avoids the errors of manual acquisition.
In step S01 of this embodiment, a first relationship model between the relative distance of the two reference target points, their pixel distances from the central point in the image, and the actual coordinate position of the central point is pre-constructed; after the relative distance and the pixel coordinate positions of the two reference target points are obtained, the actual coordinate position of the central point is determined using the first relationship model. Through the coordinate mapping relation formed between the two target points and the central point, only the relative distance between the two corresponding points in the visual imaging needs to be acquired; the actual coordinates of the central point can be determined from this relative distance information without determining the actual positions of the two points, thus completing the whole calibration process of mapping target positions on the actual road surface to the corresponding pixel positions in the picture. The relative distance between two reference target points in a road environment is easy to obtain; in particular, it can be determined from prior information in the road environment, such as the known spacing of adjacent lane lines, and the positions of the two reference target points in the image are likewise easy to determine. In this way, the actual coordinate position of the central point can be determined simply and efficiently by means of the two reference target points, without manual calibration in the field on the road.
The following describes in detail, by way of example, the case in which the relative distance between two points on the road surface (within the visual detection range) can be determined but their absolute positions cannot, and the actual coordinate position of the center point is obtained from that relative distance together with the pixel positions at which the two points are imaged in the picture.
As is apparent from the above formulas (1) to (18), when f and h are known, dy1, y1, and yc (or dy2, y2, and yc) stand in a definite relationship: given the values of any two of the three, the third can be solved for, and the expression of the actual normal distance of any point can be obtained.
When the reference target point is an interior point, there are:
(19) [equation image not reproduced]
when the reference target point is an external point, there are:
(20) [equation image not reproduced]
according to the types of the two reference target points, the following three cases are specifically analyzed:
(1) First case: when both reference target points are interior points, the actual normal distance between the two reference target points is:
(21) [equation image not reproduced]
where Δy is the actual normal distance between the two reference target points, and y11 and y12 are the normal distances of the two reference target points in the camera coordinate system, satisfying:
(22) [equation image not reproduced]
where dy11 and dy12 are the corresponding pixel distances of the two points from the center point in the image.
Then it can be obtained:
(23) [equation image not reproduced]
wherein:
(24) [equation image not reproduced]
(25) [equation image not reproduced]
(26) [equation image not reproduced]
(27) [equation image not reproduced]
Since a1, b1, and c1 are all known quantities, the trigonometric equation in x can be solved, giving:
(28) [equation image not reproduced]
wherein:
(29) [equation image not reproduced]
(30) [equation image not reproduced]
(31) [equation image not reproduced]
(32) [equation image not reproduced]
Further, the value of the actual normal distance yc of the center point can be obtained as:
(33) [equation image not reproduced]
(2) Second case: when both reference target points are exterior points, the actual normal distance between the two reference target points is:
(34) [equation image not reproduced]
where y21 and y22 are the normal distances of the two points in the camera coordinate system, satisfying:
[equation image not reproduced]
where dy21 and dy22 are the respective pixel distances of the two points from the center point in the image.
Similarly, the following can be obtained:
(35) [equation image not reproduced]
where:
[equation image not reproduced]
the a2, b2 and c2 are all known quantities, and the trigonometric function equation related to x is solved to obtain the solution:
(40) [equation image not reproduced]
wherein:
[equation images not reproduced]
Further, the actual normal distance yc of the center point is calculated as:
[equation image not reproduced]
(3) Third case: if one point is an interior point and the other an exterior point, with normal distances y11 and y22 respectively in the camera coordinate system, the distance between them is:
[equation image not reproduced]
where dy22 and dy11 are the corresponding pixel distances of the two points from the center point in the image; this can be simplified as:
[equation image not reproduced]
This equation has no analytical solution; only a numerical approximation can be obtained, which easily introduces error. Therefore, when selecting the two reference target points, both should preferably be interior points or both exterior points; choosing one interior point and one exterior point should be avoided.
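As a minimal illustration of this selection rule, the interior/exterior type of a candidate reference point can be checked before use by comparing its image row with that of the principal point obtained from the intrinsic matrix. The sign convention below (image rows increase downward, and nearer ground points image below the principal point, as for a typical forward-tilted roadside camera) is an assumption, and the function names are illustrative, not from the original text.

```python
def classify_point(pixel_row: float, principal_row: float) -> str:
    """Classify a ground point as interior (nearer than the center point D)
    or exterior, assuming rows increase downward and nearer ground points
    image below the principal point."""
    return "interior" if pixel_row > principal_row else "exterior"

def valid_reference_pair(row_a: float, row_b: float, principal_row: float) -> bool:
    """Accept a pair of reference points only if both are interior or both
    exterior, since the mixed case admits no analytic solution."""
    return classify_point(row_a, principal_row) == classify_point(row_b, principal_row)
```

For example, with a principal row of 540, candidates at rows 900 and 820 are both interior and form a valid pair, while rows 900 and 300 would be rejected.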
Based on the above analysis, if both reference target points are interior points, that is, the Y-axis distances of the two reference target points are smaller than the ordinate value of the center point, the first relationship model specifically adopts equations (28)-(34), namely:
[equation image not reproduced]
where h is the installation height of the image acquisition device, f is its focal length value, dy11 and dy12 are the pixel distances in the image between the two interior reference target points and the center point, and yc is the actual coordinate position of the center point.
If both reference target points are exterior points, that is, the Y-axis distances of the two reference target points are greater than the ordinate value of the center point, the first relationship model adopts expressions (41)-(46), namely:
[equation images not reproduced]
where dy21 and dy22 are the pixel distances in the image between the two exterior reference target points and the center point, and yc is the actual coordinate position of the center point on the road surface.
In this embodiment, besides the above, the specific form of the first relationship model for calculating the actual coordinates of the center point from the two reference target points may be adaptively adjusted according to actual requirements, for example by adding adjustment factors or weight coefficients, and other forms may even be adopted to construct the first relationship model.
In this manner, the actual coordinates of the center point can be determined simply and accurately by acquiring only the actual relative distance between the two corresponding points in the visual imaging, without acquiring the actual positions of the two points themselves, so the whole calibration process of mapping target positions on the actual road surface to the corresponding pixel positions in the picture can be completed quickly and efficiently.
In a specific application embodiment, in step S01 the two reference target points may be taken from two adjacent lane dashes on the traffic road surface. Since the length of a lane dash and the spacing between adjacent dashes are generally standard, determined distances, the spacing between adjacent dashes can be read off directly and the relative distance between the two reference target points determined from it, conveniently exploiting this distance prior. Of course, the two reference target points may also be selected using other fixed-length markings on the road surface of the traffic scene. If conditions on the two sides of the traffic road allow, the two reference target points may instead be taken at the roadside, and their relative distance obtained by measuring the distance between them. From the resulting actual position coordinates of the center point, the actual position coordinates of all points on the road surface of the traffic scene can be mapped and calibrated to pixel coordinates in the imaged road surface.
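As a sketch of using the lane-dash prior, if the dash length and gap follow a known local standard, the relative distance between two reference points taken at corresponding positions on different dashes is a multiple of the dash cycle. The 6 m dash / 9 m gap defaults below are illustrative assumptions, not values from this text; substitute the standard applicable to the actual road.

```python
def dash_relative_distance(n_cycles: int, dash_m: float = 6.0, gap_m: float = 9.0) -> float:
    """Relative distance between corresponding points on two lane dashes
    separated by n_cycles full dash+gap cycles (defaults are illustrative)."""
    return n_cycles * (dash_m + gap_m)
```

Under these defaults, corresponding points on adjacent dashes (one cycle apart) are 15 m apart.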
In a specific application embodiment, the selection manner of the two reference target points may be determined according to a specific traffic scene:
1. If the current traffic scene contains a marker with fixed-length markings from which the relative distance between two points is evident, such as a lane dash or two adjacent lane dashes, the two reference target points are selected on that marker and the distance between them determined. The actual position coordinates of the center point D are then extracted according to the above steps, and the position calibration of any point on the actual road surface to its corresponding pixel in the formed picture is completed by step S02.
2. If the current traffic scene contains no marker with fixed-length markings, but the roadside is suitable for an operator to move along, the two reference target points may be selected at the roadside and the relative distance between them measured, either directly (for example with a rope of fixed length) or by means such as radar ranging. After the actual position coordinates of the center point D are extracted according to the above steps, the position calibration of any point on the actual road surface to its corresponding pixel in the formed image is completed according to step S02.
To reduce error and obtain more accurate actual position coordinates of the center point D, multiple trials may be performed: each trial yields one set of actual position coordinates of D according to the above scheme, and a statistic (such as the mean) is then taken over the obtained coordinates. The number of trials can be set according to actual requirements; preferably the number of trials N is no more than 4.
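The repeated-trial averaging described above can be sketched as follows; the trial results are assumed to be scalar estimates of the center point's normal distance yc, one per trial, and the function name is illustrative.

```python
def combine_center_estimates(yc_trials):
    """Average the per-trial estimates of the center point's normal
    distance yc into a single calibrated value (the mean is one choice
    of statistic; a median would also serve)."""
    if not yc_trials:
        raise ValueError("at least one trial is required")
    return sum(yc_trials) / len(yc_trials)
```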
In this embodiment, the coordinate position relationship model in step S02 is constructed from the relationship among the pixel coordinate position of the target point in the image, the actual coordinate position of the target point, and the actual coordinate position of the center point. As equations (6), (9), (15), and (18) above show, the actual coordinates of any target point, the actual coordinates of the center point, and the pixel coordinates in the image stand in a definite relationship; once the actual coordinate position of the center point is obtained, the imaging position of a target point in the calibration image can be calibrated from the target point's actual coordinates. Specifically, for an interior point the coordinate position relationship model is given by equations (6) and (9), and for an exterior point by equations (15) and (18). Of course, the specific form of the model may be adaptively adjusted according to actual requirements, for example by adding adjustment factors or weight coefficients, and other forms may even be used; the key is to construct the relationship among the actual coordinates of the target point, the actual coordinates of the center point, and the pixel coordinates in the image, so that the pixel coordinates of a target point in the image can be calibrated from the target point's actual coordinates together with the actual coordinates and image pixel coordinates of the center point.
This embodiment can be applied to various fixed traffic scenes. After the absolute coordinates of targets on the traffic road surface are extracted, one-to-one calibration between the pixel coordinates of the relevant road-surface parts in the visual image and actual distances can be achieved while ensuring calibration accuracy and operational simplicity. At the same time, by simplifying the fusion-calibration steps, the actual position coordinates of the road-surface center point need not be calibrated manually: the mapping of the actual position coordinates of all points on the road surface to pixel coordinates in the imaged road surface can be completed quickly, efficiently, and accurately, avoiding the danger of manual calibration work on the roadway and reducing dependence on manual installation operations.
This embodiment also provides a device for calibrating the position of a visual detection target in a fixed traffic roadside scene, comprising a processor and a memory, the memory storing a computer program and the processor executing the computer program to perform the above calibration method.
This embodiment further provides a computer-readable storage medium storing a computer program which, when executed, implements the above calibration method.
Example 2:
as shown in fig. 6, the method for calibrating the position of the visual detection target in the fixed traffic roadside scene according to the embodiment includes the following steps:
s01, determining the position of the center point: acquiring the actual coordinate position of a reference target point and its corresponding pixel coordinate position in the image, the vision sensor to be calibrated being fixedly arranged at a specified roadside position and the reference target point lying on the traffic road surface within the sensor's visual detection range, and determining the actual coordinate position of the center point from the acquired coordinate positions;
s02, position information calibration: acquiring the actual coordinate position of the target point to be calibrated, and calculating its pixel coordinate position in the image from its actual coordinate position, the actual coordinate position of the center point determined in step S01, and a pre-established coordinate position relationship model between target point and center point, thereby completing the calibration of the target point to be calibrated in the image.
The basic principle of this embodiment is similar to that of embodiment 1, except that in step S01 a single reference target point is used to determine the actual coordinate position of the center point. Because the pixel coordinates of a target point in the image, the actual position coordinates of the target point, and the actual position coordinates of the center point stand in a determined relationship, if the actual position coordinates and pixel coordinates of any one target point on the road surface of the traffic scene can be determined, the actual position coordinates of the center point can be obtained from this three-way relationship, after which the mapping of the actual position coordinates of all points on the road surface to pixel coordinates in the imaged road surface can be completed.
In step S01 of this embodiment, a second relationship model between the actual coordinate position of the target point, the pixel coordinate position of the target point in the image, and the actual coordinate position of the central point is specifically pre-constructed, and after the actual coordinate position and the pixel coordinate position of the reference target point are obtained, the actual coordinate position of the central point is determined using the second relationship model.
From embodiment 1 it is known that, when f and h are given, dy1, y1, and yc (or dy2, y2, and yc) stand in a definite relationship: knowing two of the three, the third can be solved for. Whether the target point is an interior or an exterior point can be judged by comparing its imaging pixel position in the image with the center-point pixel position obtained from the intrinsic matrix. When the target point is an interior point, the actual normal distance of the center point is:
(47) [equation image not reproduced]
where h is the installation height of the image acquisition device, f is its focal length value, dy1 is the pixel distance in the vertical direction in the image between the interior reference target point and the center point, y1 is the normal distance in the actual coordinate position of the interior reference target point, and yc is the actual coordinate position of the center point in the vertical direction.
When the target point is an exterior point, the actual normal distance of the center point is:
(48) [equation image not reproduced]
where dy2 is the pixel distance in the vertical direction in the image between the exterior reference target point and the center point, and y2 is the normal distance in the actual coordinate position of the exterior reference target point.
According to these equations, once the actual position distance of any target point and the pixel position at which it is imaged are obtained, the actual normal distance of the center point can be solved for; calibrating any single point on the road surface thus yields the actual coordinate position of the center point. When the reference target point is an interior point, that is, its Y-axis distance is smaller than the ordinate value of the center point, the expression of the second relationship model is equation (47) above; when it is an exterior point, that is, its Y-axis distance is greater than the ordinate value of the center point, the expression is equation (48) above.
In this embodiment, the entire calibration process may be completed by combining radar ranging with visual imaging: a radar obtains the actual position coordinates of a target point, a camera obtains the corresponding pixel position, and after the actual position coordinates of the center point D are obtained in the above manner, target positions on the actual road surface are calibrated to the corresponding pixel positions in the picture in the same way as step S02 of embodiment 1.
In a specific application embodiment, a camera and a radar record, at the same moment, the actual position coordinates of a moving vehicle on the road surface and its pixel coordinates in the corresponding picture. Because a vehicle generally occupies a certain pixel area, when the corresponding pixel point is selected, a point close to the road surface at the front of the vehicle is preferably taken as the corresponding pixel coordinate. Further, the final actual coordinates of the center point D may be obtained by performing several trials and taking a statistical average.
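One common way to pick "a point close to the road surface" from a detected vehicle is to take the bottom-center of its bounding box; the exact rule used here is not specified in the text, so the sketch below is an illustrative choice under the convention that image rows increase downward.

```python
def ground_contact_pixel(bbox):
    """Approximate a vehicle's road-contact pixel from its bounding box
    (x1, y1, x2, y2), with rows increasing downward: take the
    bottom-center point of the box."""
    x1, y1, x2, y2 = bbox
    return ((x1 + x2) / 2.0, max(y1, y2))
```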
This calibration mode suits situations in which the traffic scene contains no fixed-distance markers and the roadside is unsuitable for measurement work; it can be chosen according to the actual scene requirements.
Example 3:
In this embodiment, calibration combines embodiments 1 and 2: when determining the center-point position, the determination mode is chosen according to the real-time traffic scene environment. If it is detected that the current traffic scene contains a marker with fixed-distance markings, such as a lane line, or that the roadside is free of interfering obstacles that would affect measurement, the system automatically switches to the mode of embodiment 1 to determine the center-point position; otherwise it automatically switches to the mode of embodiment 2. Accurate calibration can thus adapt automatically to different traffic scene environments.
As shown in fig. 7, the detailed steps for implementing the visual target position information calibration in the fixed traffic roadside scene in this embodiment are as follows:
step 1: after the camera is fixedly installed at the roadside, correct the camera to obtain the parameters of the intrinsic matrix;
step 2: acquire and identify an image of the current traffic scene environment in real time, and judge whether the scene contains a marker with fixed-distance markings or whether interfering obstacles exist at the roadside; if such a marker exists, or the roadside is free of obstacles so that measurement is possible, go to step 3; otherwise go to step 4;
step 3: take two reference target points, obtain the relative distance between them, determine the actual coordinate position of the center point D in the manner of step S01 of embodiment 1, and go to step 5;
step 4: take one reference target point, obtain its actual position coordinates and its pixel coordinates in the image by combining the radar and the vision sensor, determine the actual coordinate position of the center point D in the manner of step S01 of embodiment 2, and go to step 5;
step 5: acquire the actual coordinate position of the target point to be calibrated, calculate its pixel coordinate position in the image from its actual coordinate position and the determined actual coordinate position of the center point in the manner of step S02 of embodiment 1, and complete the calibration of the target point in the image.
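The branching in steps 2 to 4 can be sketched as below. The Scene fields and the two estimator stand-ins are hypothetical placeholders: the real estimators are the formulas of embodiments 1 and 2 (rendered as images in the original), which this sketch does not reproduce.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    has_fixed_distance_marker: bool   # e.g. lane dashes detected (step 2)
    roadside_measurable: bool         # roadside free of interfering obstacles

def estimate_center_two_refs(scene: Scene) -> str:
    return "embodiment-1"   # placeholder for the two-reference-point estimator

def estimate_center_radar(scene: Scene) -> str:
    return "embodiment-2"   # placeholder for the radar+vision estimator

def choose_center_estimator(scene: Scene):
    """Step 2 of Example 3: pick the center-point determination mode
    from the observed environment."""
    if scene.has_fixed_distance_marker or scene.roadside_measurable:
        return estimate_center_two_refs   # steps 3 then 5
    return estimate_center_radar          # steps 4 then 5
```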
In this way, the center-point position can be determined in the mode suited to the traffic scene environment, so that different traffic scenes are matched automatically and efficient, accurate calibration is achieved in a variety of traffic scene environments.
The foregoing describes preferred embodiments of the invention and is not to be construed as limiting it in any way. Although the invention has been described with reference to preferred embodiments, it is not limited thereto. Any simple modification, equivalent change, or variation of the above embodiments made in keeping with the technical spirit of the invention falls within the protection scope of its technical scheme.

Claims (10)

1. A visual detection target position calibration method under a fixed traffic roadside scene is characterized by comprising the following steps:
s01, determining the position of a center point: acquiring the relative distance between two reference target points and corresponding pixel coordinate positions in an image to be calibrated acquired by a visual sensor to be calibrated, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a roadside, the reference target points are positioned on a traffic road surface in a visual detection range, and the actual coordinate position of a central point in the traffic road surface in the visual detection range is determined according to the acquired relative distance and the pixel coordinate positions;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image to be calibrated according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point to finish calibration.
2. The method for calibrating the position of a visual detection target under a fixed traffic roadside scene of claim 1, wherein in step S01, a first relation model between the relative distance between two reference target points, the pixel distance between two reference target points and a central point in an image, and the actual coordinate position of the central point is pre-constructed, and after the relative distance between two reference target points and the pixel coordinate position are obtained, the actual coordinate position of the central point is determined using the first relation model.
3. The method for calibrating the position of a visual detection target under a fixed traffic roadside scene according to claim 2, wherein: if both reference target points are interior points, that is, the Y-axis distances of the two reference target points are smaller than the ordinate value of the center point, the expression of the first relationship model is as follows:
[equation images not reproduced]
wherein h is the installation height of the image acquisition device, f is the focal length value of the image acquisition device, dy11 and dy12 are the pixel distances in the image between the two interior reference target points and the center point, and yc is the actual coordinate position of the center point;
if both reference target points are exterior points, that is, the Y-axis distances of the two reference target points are greater than the ordinate value of the center point, the expression of the first relationship model is as follows:
[equation image not reproduced]
wherein dy21 and dy22 are the pixel distances in the image between the two exterior reference target points and the center point, and yc is the actual coordinate position of the center point on the road surface.
4. The method for calibrating the position of a visually detected target under a scene at the side of a fixed traffic road according to claim 1, wherein in step S01, two reference target points are respectively taken from the virtual lines of two adjacent lanes on the traffic road, and the relative distance between the two reference target points is determined according to the distance between the virtual lines of two adjacent lanes; or in step S01, two reference points are taken from both sides of the traffic road, and the distance between the two reference points is measured to obtain the relative distance between the two reference points.
5. The method for calibrating the position of a visual detection target under the fixed traffic roadside scene according to any one of claims 1 to 4, wherein the coordinate position relationship model is constructed according to the pixel coordinate position of the target point in the image, the actual coordinate position of the target point and the actual coordinate position of the central point.
6. A visual detection target position calibration method under a fixed traffic roadside scene is characterized by comprising the following steps:
s01, determining the position of a center point: acquiring an actual coordinate position of a reference target point and a corresponding pixel coordinate position of the reference target point in an image, wherein the visual sensor to be calibrated is fixedly arranged at a specified position of a road side, the reference target point is positioned on a traffic road surface in a visual detection range of the visual sensor to be calibrated, and the actual coordinate position of a central point is determined according to the acquired coordinate position;
s02, position information calibration: and acquiring the actual coordinate position of the target point to be calibrated, and calculating the pixel coordinate position of the target point to be calibrated in the image according to the actual coordinate position of the target point to be calibrated and the actual coordinate position of the central point determined in the step S01 and a pre-established coordinate position relation model between the target point and the central point, so as to complete the calibration of the target point to be calibrated in the image.
7. The method as claimed in claim 6, wherein in step S01, a second relation model between the actual coordinate position of the target point and the corresponding pixel coordinate position of the target point in the image and the actual coordinate position of the central point is pre-constructed, and after the actual coordinate position and the pixel coordinate position of the reference target point are obtained, the actual coordinate position of the central point is determined using the second relation model.
8. The method for calibrating a position of a visual detection target under a fixed traffic roadside scene of claim 7, wherein in step S01, when the reference target point is an interior point, that is, the Y-axis distance of the reference target point is smaller than the ordinate value of the center point, the expression of the second relational model is:
[equation image not reproduced]
wherein h is the installation height of the image acquisition device, f is the focal length value of the image acquisition device, dy1 is the pixel distance in the vertical direction in the image between the interior reference target point and the center point, y1 is the normal distance in the actual coordinate position of the interior reference target point, and yc is the actual coordinate position of the center point in the vertical direction;
when the reference target point is an external point, that is, the Y-axis distance of the reference target point is greater than the ordinate value of the center point, the expression of the second relationship model is as follows:
[equation image not reproduced]
wherein dy2 is the pixel distance in the vertical direction in the image between the exterior reference target point and the center point, and y2 is the normal distance in the actual coordinate position of the exterior reference target point.
9. A visual detection target position calibration device under a fixed traffic roadside scene, comprising a processor and a memory, wherein the memory is used for storing a computer program, and the processor is used for executing the computer program to execute the method according to any one of claims 1-8.
10. A computer-readable storage medium storing a computer program, wherein the computer program when executed implements the method of any one of claims 1 to 8.
CN202111040735.3A 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene Active CN113496528B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111040735.3A CN113496528B (en) 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111040735.3A CN113496528B (en) 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene

Publications (2)

Publication Number Publication Date
CN113496528A true CN113496528A (en) 2021-10-12
CN113496528B CN113496528B (en) 2021-12-14

Family

ID=77997178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111040735.3A Active CN113496528B (en) 2021-09-07 2021-09-07 Method and device for calibrating position of visual detection target in fixed traffic roadside scene

Country Status (1)

Country Link
CN (1) CN113496528B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035320A (en) * 2018-08-12 2018-12-18 浙江农林大学 Depth extraction method based on monocular vision
JP2020134220A (en) * 2019-02-15 2020-08-31 日本電信電話株式会社 Position coordinate derivation device, position coordinate derivation method, position coordinate derivation program, and system
CN111750820A (en) * 2019-03-28 2020-10-09 财团法人工业技术研究院 Image positioning method and system
CN111982072A (en) * 2020-07-29 2020-11-24 西北工业大学 Target ranging method based on monocular vision
US20210041236A1 (en) * 2018-04-27 2021-02-11 China Agricultural University Method and system for calibration of structural parameters and construction of affine coordinate system of vision measurement system
CN113012237A (en) * 2021-03-31 2021-06-22 武汉大学 Millimeter wave radar and video monitoring camera combined calibration method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115401689A (en) * 2022-08-01 2022-11-29 北京市商汤科技开发有限公司 Monocular camera-based distance measuring method and device and computer storage medium
CN115401689B (en) * 2022-08-01 2024-03-29 北京市商汤科技开发有限公司 Distance measuring method and device based on monocular camera and computer storage medium
CN115166722A (en) * 2022-09-05 2022-10-11 湖南众天云科技有限公司 Non-blind-area single-rod multi-sensor detection device for road side unit and control method
CN115166722B (en) * 2022-09-05 2022-12-13 湖南众天云科技有限公司 Non-blind-area single-rod multi-sensor detection device for road side unit and control method
CN118089653A (en) * 2024-04-29 2024-05-28 天津市普迅电力信息技术有限公司 Non-contact unmanned aerial vehicle tower elevation measurement method

Also Published As

Publication number Publication date
CN113496528B (en) 2021-12-14

Similar Documents

Publication Publication Date Title
CN113496528B (en) Method and device for calibrating position of visual detection target in fixed traffic roadside scene
CN107703528B (en) Visual positioning method and system combined with low-precision GPS in automatic driving
US10909395B2 (en) Object detection apparatus
CN103487034B (en) Method for measuring distance and height by vehicle-mounted monocular camera based on vertical type target
CN106289159B (en) Vehicle distance measurement method and device based on distance measurement compensation
WO2019238127A1 (en) Method, apparatus and system for measuring distance
CN113673282A (en) Target detection method and device
CN103499337B (en) Vehicle-mounted monocular camera distance and height measuring device based on vertical target
JP2020532800A (en) Camera calibration systems and methods using traffic sign recognition, and computer-readable media
LU502288B1 (en) Method and system for detecting position relation between vehicle and lane line, and storage medium
KR102145557B1 (en) Apparatus and method for data fusion between heterogeneous sensors
CN110766760B (en) Method, device, equipment and storage medium for camera calibration
CN111027381A (en) Method, device, equipment and storage medium for recognizing obstacle by monocular camera
CN112348902A (en) Method, device and system for calibrating installation deviation angle of road end camera
CN105809669A (en) Method and apparatus of calibrating an image detecting device
CN114413958A (en) Monocular vision distance and speed measurement method of unmanned logistics vehicle
CN110660229A (en) Vehicle speed measuring method and device and vehicle
CN108108706B (en) Method and system for optimizing sliding window in target detection
CN110658353B (en) Method and device for measuring speed of moving object and vehicle
CN111598956A (en) Calibration method, device and system
CN111738035A (en) Method, device and equipment for calculating yaw angle of vehicle
WO2022133986A1 (en) Accuracy estimation method and system
CN110659551A (en) Motion state identification method and device and vehicle
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN113869440A (en) Image processing method, apparatus, device, medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant