WO2021134507A1 - Positioning method for video surveillance and video surveillance system - Google Patents

Positioning method for video surveillance and video surveillance system

Info

Publication number
WO2021134507A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
coordinates
subject
coordinate system
conversion relationship
Prior art date
Application number
PCT/CN2019/130604
Other languages
English (en)
French (fr)
Inventor
李鑫
洪家明
Original Assignee
海能达通信股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 海能达通信股份有限公司
Priority to PCT/CN2019/130604
Publication of WO2021134507A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • This application relates to the field of video technology, and in particular to a video surveillance positioning method and a video surveillance system.
  • in the related art, the position of the target object on the screen is calculated in real time using the principle of spherical polar coordinates: the real point is mapped onto a spherical polar coordinate system centered on the camera, performing a 2D-3D conversion.
  • after the camera rotates or zooms, the spherical polar coordinates are converted back into 2D screen coordinates and displayed on the screen.
  • this method can only handle video labels that are added manually on the screen, which limits its applicability.
  • the main technical problem solved by this application is to provide a video surveillance positioning method and a video surveillance system, which can track the position of the subject in the display screen in real time.
  • a technical solution adopted in this application is to provide a video surveillance positioning method, which is applied to a video surveillance system.
  • the video surveillance system includes a camera, and the camera is used to shoot a subject.
  • the method includes: obtaining the three-dimensional coordinates of the subject; calculating, from the three-dimensional coordinates, the world coordinates of the subject in the world coordinate system; obtaining a first conversion relationship between the world coordinate system and the camera coordinate system, and calculating the camera coordinates of the subject from the world coordinates and the first conversion relationship;
  • obtaining a second conversion relationship between the camera coordinate system and the image coordinate system, and calculating the image coordinates of the subject from the camera coordinates and the second conversion relationship; obtaining a third conversion relationship between the image coordinate system and the pixel coordinate system,
  • and calculating the pixel coordinates of the subject from the image coordinates and the third conversion relationship.
  • the step of calculating the world coordinates of the subject in the world coordinate system includes: setting the height difference between the camera and the subject to a preset value; the three-dimensional coordinates include a horizontal offset angle and a vertical offset angle, and the world coordinates are calculated from the horizontal offset angle and the vertical offset angle; the world coordinates satisfy the following formula:
  • the height difference is 1, α is the horizontal offset angle, β is the vertical offset angle, and (x_w, y_w, z_w) are the world coordinates.
  • (x_c, y_c, z_c) are camera coordinates
  • (x_w, y_w, z_w) are world coordinates
  • F_p and F_t are the focal lengths of the camera lens
  • (x_i, y_i) are image coordinates
  • (x_c, y_c, z_c) are camera coordinates.
  • (x_i, y_i) are image coordinates
  • (dx, dy) are pixel coordinates
  • WIDTH and HEIGHT are the width and height of the imaging screen of the camera, respectively.
  • in the calculation formulas for the lens focal lengths, v is the horizontal field of view of the camera, u is the vertical field of view of the camera, and WIDTH and HEIGHT are the width and height of the camera's imaging screen, respectively.
  • the step of obtaining the three-dimensional coordinates of the subject includes:
  • the three-dimensional coordinates are calculated from the longitude and latitude of the camera, the longitude and latitude of the subject, and the height difference between them; the three-dimensional coordinates include the horizontal offset angle and the vertical offset angle, which satisfy the following formula:
  • α is the horizontal offset angle
  • β is the vertical offset angle
  • (A_j, A_w) are the longitude and latitude of the camera
  • (B_j, B_w) are the longitude and latitude of the subject
  • H is the height difference
  • dist is the distance between the subject and the camera.
  • dist is the distance between the subject and the camera
  • r is the radius of the earth's equator.
  • the method further includes: displaying the preset label at the pixel coordinates (d_x, d_y).
  • the method further includes: establishing a world coordinate system and a camera coordinate system with the camera as the origin, and establishing an image coordinate system and a pixel coordinate system based on the imaging screen of the camera.
  • the step of obtaining the three-dimensional coordinates of the subject includes: obtaining the first pixel coordinates of the subject; obtaining a fourth conversion relationship between the first pixel coordinate system and the first image coordinate system, and obtaining the first image coordinates of the subject from the first pixel coordinates and the fourth conversion relationship; obtaining a fifth conversion relationship between the first image coordinate system and the first camera coordinate system, and obtaining the first camera coordinates of the subject from the first image coordinates and the fifth conversion relationship.
  • (d_x1, d_y1) are the first pixel coordinates
  • (x_i1, y_i1) are the first image coordinates
  • WIDTH and HEIGHT are the width and height of the imaging screen of the camera, respectively.
  • (x_c1, y_c1, z_c1) are the first camera coordinates
  • F_p1 and F_t1 are the lens focal lengths of the camera.
  • (x_w1, y_w1, z_w1) are the first world coordinates
  • φ_1 = 90° + β
  • ψ_1 = 270° + α
  • (α, β) are the three-dimensional coordinates of the subject.
  • the video surveillance system includes a camera, a display device, and a processor; the camera is used to shoot the subject, the display device is used to display the image captured by the camera, and the processor is used to implement the method of any of the above embodiments.
  • the positioning method for video surveillance in this application is applied to a video surveillance system.
  • the method obtains the three-dimensional coordinates of the subject, calculates the world coordinates of the subject from the three-dimensional coordinates, calculates the camera coordinates from the world coordinates, calculates the image coordinates from the camera coordinates, and calculates the pixel coordinates from the image coordinates.
  • calculating the pixel coordinates of the subject gives the position of the subject on the display screen. In this way, not only can the real-time position of a fixed target on the display screen be calculated, but also the real-time position of a moving target, with high positioning accuracy and a wide range of applications.
  • FIG. 1 is a schematic flowchart of an embodiment of the positioning method for video surveillance of the present application;
  • FIG. 2 is a coordinate diagram of the world coordinate system in S11 of FIG. 1;
  • FIG. 3 is a coordinate diagram of the camera coordinate system in S13 of FIG. 1;
  • FIG. 4 is a rotation diagram of the world coordinate system in S13 of FIG. 1;
  • FIG. 5 is a coordinate diagram of the image coordinate system in S14 of FIG. 1;
  • FIG. 6 is a coordinate diagram of the pixel coordinate system in S15 of FIG. 1;
  • FIG. 7 is a schematic flowchart of another embodiment of S11 in FIG. 1;
  • FIG. 8 is a schematic structural diagram of an embodiment of the video surveillance system of the present application.
  • this application provides a video surveillance positioning method, which has a relatively simple calculation process and high positioning accuracy.
  • the method provided in this application can be applied to a video surveillance system, which includes a camera, and the camera is used to photograph a subject.
  • FIG. 1 is a schematic flowchart of an embodiment of a positioning method for video surveillance according to the present application. The method specifically includes:
  • the subject can be a tracking object that needs to be tagged.
  • the subject can be a moving target, such as a car on the road, people walking, etc., or a fixed target on the display screen, such as a building, a road, etc.
  • the world coordinate system takes the position of the camera as the origin O_w; the z_w axis points vertically upward, the x_w axis points horizontally due north, and the y_w axis points horizontally due west.
  • the world coordinate system can also select other directions that are easily understood by those skilled in the art.
  • the three-dimensional coordinates of the subject include a horizontal offset angle α and a vertical offset angle β.
  • the horizontal offset angle α is the angle between the line connecting the subject and the camera and the due-north direction, and the vertical offset angle β is the angle between the line connecting the subject and the camera and the ground.
  • the three-dimensional coordinates of the subject can be obtained according to the GPS information of the subject.
  • the three-dimensional coordinates (α, β) of the subject can be calculated from the longitude and latitude of the camera, the longitude and latitude of the subject, and the height difference between the camera and the subject.
  • (A_j, A_w) are the longitude and latitude of the camera
  • (B_j, B_w) are the longitude and latitude of the subject
  • H is the height difference between the camera and the subject
  • dist is the distance between the subject and the camera.
  • the distance dist between the subject and the camera satisfies the following formula:
  • the longitude and latitude information of the camera in this embodiment and the longitude and latitude information of the subject can be obtained from the corresponding GPS system, for example, from various GPS data sources such as mobile terminals, walkie-talkies, and automobiles.
  • after the three-dimensional coordinates are obtained, the world coordinates of the subject can be calculated from them. When calculating the world coordinates, only the ratios between the coordinates are needed, not their precise values, so the height difference between the subject and the camera can be set to a preset value, for example directly set to 1, and the world coordinates are then calculated from the horizontal offset angle and the vertical offset angle. In a specific embodiment, the world coordinates satisfy the following formula:
  • the height difference between the subject and the camera is 1, and (x_w, y_w, z_w) are the world coordinates of the subject in the world coordinate system.
  • S13: Obtain the first conversion relationship between the world coordinate system and the camera coordinate system, and calculate the camera coordinates of the subject according to the world coordinates and the first conversion relationship.
  • the camera coordinates corresponding to the subject can be calculated according to the first conversion relationship between the world coordinate system and the camera coordinate system.
  • a camera coordinate system is established.
  • the camera coordinate system takes the camera as the origin O_c
  • the main optical axis of the camera is the z_c axis
  • the y_c axis is the intersection line of the plane z_wO_cz_c (the plane through z_w and z_c) and the plane through the origin perpendicular to z_c, with the direction of y_c forming an acute angle with z_w; the direction of the x_c axis is then determined by the left-hand rule.
  • the camera coordinate system can also select other directions that are easily understood by those skilled in the art.
  • the transformation from the world coordinate system to the camera coordinate system is a rigid-body transformation: the subject is not deformed, and only rotation and translation are required.
  • the world coordinate system and the camera coordinate system share the same origin, so only a rotation is needed. From the geometric relationship in Figure 3, x_c lies in the x_wO_wy_w plane.
  • the first step is to rotate the world coordinate system counterclockwise around the z_w axis by ω so that x_w and x_c coincide, and denote the resulting coordinate system (x_w0, y_w0, z_w0); the second step is to rotate the coordinate system (x_w0, y_w0, z_w0) counterclockwise around x_w by θ so that z_w and z_c coincide, at which point y_w and y_c also automatically coincide and the rotation is complete, as shown in Figure 4.
  • the first conversion relationship between the world coordinate system and the camera coordinate system satisfies the following formula:
  • (x_c, y_c, z_c) are the camera coordinates of the subject in the camera coordinate system.
  • Substituting the obtained world coordinates into the above formula can obtain the camera coordinates of the subject in the camera coordinate system.
  • S14: Acquire a second conversion relationship between the camera coordinate system and the image coordinate system, and calculate the image coordinates of the subject according to the camera coordinates and the second conversion relationship.
  • the image coordinates of the subject can be calculated according to the second conversion relationship between the camera coordinate system and the image coordinate system.
  • an image coordinate system is established.
  • the image coordinate system is based on the imaging screen of the camera, with the center of the imaging screen as the origin o_i; the x_i axis points horizontally to the right and the y_i axis vertically downward, as shown in FIG. 5.
  • the image coordinate system can also select other directions that are easily understood by those skilled in the art.
  • F_p and F_t are the focal lengths of the camera lens, and (x_i, y_i) are the image coordinates.
  • the lens focal lengths F_p and F_t satisfy formulas in which v is the horizontal field of view of the camera and u is the vertical field of view of the camera; both can be obtained by looking them up in the camera's specifications.
  • S15: Acquire a third conversion relationship between the image coordinate system and the pixel coordinate system, and calculate the pixel coordinates of the subject according to the image coordinates and the third conversion relationship.
  • the pixel coordinates of the subject are calculated according to the third conversion relationship between the image coordinate system and the pixel coordinate system.
  • a pixel coordinate system is established, and the pixel coordinate system is based on the imaging screen of the camera.
  • the pixel coordinate system is based on the origin O in the upper left corner of the imaging screen, the dx axis is horizontally to the right, and the dy axis is vertically downward.
  • the pixel coordinate system can also select other directions that are easily understood by those skilled in the art.
  • the third conversion relationship between the image coordinate system and the pixel coordinate system satisfies the following formula:
  • (dx, dy) are the pixel coordinates of the subject in the pixel coordinate system
  • WIDTH and HEIGHT are the width and height of the imaging screen of the camera, respectively.
  • Substituting the acquired image coordinates into the formula of the third conversion relationship above can obtain the pixel coordinates of the subject.
  • the above embodiment can obtain the pixel coordinates of the subject from GPS information, achieving relatively accurate real-time positioning of the subject on the display screen.
  • that is, this application can not only calculate the real-time position of a fixed target on the display screen, but can also track a moving target in real time simply and accurately, which gives it strong practicability.
  • a preset label can be displayed on the corresponding pixel coordinates, so that the added preset label can move with the subject.
  • the step of obtaining the three-dimensional coordinates of the subject in this embodiment includes:
  • S111: Acquire the first pixel coordinates of the subject.
  • a first pixel coordinate system is established, where the first pixel coordinate system can be the same as the pixel coordinate system in the foregoing embodiment.
  • the first pixel coordinate system is also established based on the imaging screen of the camera, with the upper-left corner of the imaging screen as the origin; the d_x1 axis points horizontally to the right and the d_y1 axis vertically downward.
  • the first pixel coordinates of the subject are known at this point, and the horizontal offset angle α and the vertical offset angle β of the subject relative to the camera can be derived from the first pixel coordinates; (α, β) are the three-dimensional coordinates of the subject.
  • S112: Obtain a fourth conversion relationship between the first pixel coordinate system and the first image coordinate system, and obtain the first image coordinates of the subject according to the first pixel coordinates and the fourth conversion relationship.
  • the first image coordinate system is established, and the first image coordinate system may be the same as the image coordinate system in the foregoing embodiment.
  • the first image coordinate system is based on the imaging screen of the camera. For details, please refer to the establishment of the image coordinate system in FIG. 5, which will not be repeated here.
  • the fourth conversion relationship between the first pixel coordinate system and the first image coordinate system is obtained, and then the first image coordinate of the subject is obtained according to the first pixel coordinate and the obtained fourth conversion relationship.
  • (d_x1, d_y1) are the first pixel coordinates
  • (x_i1, y_i1) are the first image coordinates
  • WIDTH and HEIGHT are the width and height of the imaging screen of the camera, respectively.
  • S113: Obtain a fifth conversion relationship between the first image coordinate system and the first camera coordinate system, and obtain the first camera coordinates of the subject according to the first image coordinates and the fifth conversion relationship.
  • the first camera coordinate system is established.
  • the first camera coordinate system may be the same as the camera coordinate system in the above embodiment; for details, refer to the establishment of the camera coordinate system in FIG. 3, which is not repeated here. Since the mapping from the first camera coordinate system to the first image coordinate system is a perspective projection, which is a many-to-one relationship, the camera coordinate z_c1 cannot be determined when the first image coordinates (x_i1, y_i1) are converted to the first camera coordinates (x_c1, y_c1, z_c1). However, what this embodiment needs to calculate are the horizontal offset angle α and the vertical offset angle β of the subject, i.e. the proportional relationships between the coordinates. Therefore, when calculating the first camera coordinates x_c1 and y_c1, z_c1 can be set to a first preset value; for example, z_c1 can be directly set to 1.
  • the first camera coordinates of the subject are calculated according to the fifth conversion relationship and the first image coordinates.
  • the fifth conversion relationship satisfies the following formula:
  • (x_c1, y_c1, z_c1) are the first camera coordinates of the subject in the first camera coordinate system
  • F_p1 and F_t1 are the corresponding lens focal lengths of the camera.
  • S114: Obtain a sixth conversion relationship between the first camera coordinate system and the first world coordinate system, and obtain the first world coordinates of the subject according to the first camera coordinates and the sixth conversion relationship.
  • a first world coordinate system is established; for its establishment, refer to the establishment of the world coordinate system in Figure 2. Converting the first camera coordinates (x_c1, y_c1, z_c1) to the first world coordinates (x_w1, y_w1, z_w1) can also be achieved by rotating the coordinate system.
  • the first step is to rotate the first camera coordinate system counterclockwise around X_c1 by ψ_1 so that Z_c1 coincides with Z_w1; the second step is to rotate the camera coordinate system counterclockwise around Z_c1 by φ_1 so that X_c1 coincides with X_w1, at which point Y_c1 and Y_w1 also automatically coincide and the rotation is complete.
  • the sixth conversion relationship between the first camera coordinate system and the first world coordinate system is obtained, and the first world coordinates of the subject are obtained according to the first camera coordinates and the obtained sixth conversion relationship.
  • the sixth conversion relationship satisfies the following formula:
  • the three-dimensional coordinates of the subject are obtained.
  • the three-dimensional coordinates satisfy the following formula:
  • the three-dimensional coordinates corresponding to a point can be derived from the pixel coordinates of a subject on the display screen.
  • when the camera rotates, the new pixel coordinates in the imaging frame can be calculated from these three-dimensional coordinates.
  • the positioning method provided by the present application has a simple calculation process, high positioning accuracy, and has a wide range of applications.
  • FIG. 8 is a schematic structural diagram of an embodiment of the video surveillance system provided by the present application.
  • the video surveillance system includes a camera 81, a display device 82, and a processor 83.
  • the camera 81 is used to shoot a subject, and the camera 81 includes video shooting equipment such as a high-altitude dome camera and a pinhole camera.
  • the display device 82 establishes a connection with the camera 81, and the display device 82 is used to display the image of the subject captured by the camera 81.
  • the processor 83 is configured to implement the video surveillance positioning method of any of the foregoing embodiments.


Abstract

This application discloses a positioning method for video surveillance. The method is applied to a video surveillance system that includes a camera for shooting a subject. The method includes: obtaining the three-dimensional coordinates of the subject; calculating, from the three-dimensional coordinates, the world coordinates of the subject in the world coordinate system; obtaining a first conversion relationship between the world coordinate system and the camera coordinate system, and calculating the camera coordinates of the subject from the world coordinates and the first conversion relationship; obtaining a second conversion relationship between the camera coordinate system and the image coordinate system, and calculating the image coordinates of the subject from the camera coordinates and the second conversion relationship; and obtaining a third conversion relationship between the image coordinate system and the pixel coordinate system, and calculating the pixel coordinates of the subject from the image coordinates and the third conversion relationship. In this way, the position of the subject on the display screen can be tracked in real time, and the positioning is relatively accurate.

Description

A positioning method for video surveillance and a video surveillance system
[Technical Field]
This application relates to the field of video technology, and in particular to a positioning method for video surveillance and a video surveillance system.
[Background]
In the video control screens used for command and dispatch, there is a video-label effect that can mark a position in the video and move as the camera rotates. When a video label is added to the video stream produced by a high-altitude dome camera, the label must be recalculated and moved accordingly whenever the dome camera rotates or zooms.
During long-term research and development, the inventors of this application found that in the related art, the position of the target object on the screen is calculated in real time using the principle of spherical polar coordinates: the real point is mapped onto a spherical polar coordinate system centered on the camera (a 2D-3D conversion), and after the camera rotates or zooms, the spherical polar coordinates are converted back into 2D screen coordinates and displayed on the screen. This method can only handle video labels that are added manually on the screen, which limits its applicability.
[Summary]
The main technical problem solved by this application is to provide a positioning method for video surveillance and a video surveillance system that can track the position of the subject on the display screen in real time.
To solve the above technical problem, one technical solution adopted in this application is to provide a positioning method for video surveillance, applied to a video surveillance system. The video surveillance system includes a camera for shooting a subject. The method includes: obtaining the three-dimensional coordinates of the subject; calculating, from the three-dimensional coordinates, the world coordinates of the subject in the world coordinate system; obtaining a first conversion relationship between the world coordinate system and the camera coordinate system, and calculating the camera coordinates of the subject from the world coordinates and the first conversion relationship; obtaining a second conversion relationship between the camera coordinate system and the image coordinate system, and calculating the image coordinates of the subject from the camera coordinates and the second conversion relationship; and obtaining a third conversion relationship between the image coordinate system and the pixel coordinate system, and calculating the pixel coordinates of the subject from the image coordinates and the third conversion relationship.
Further, the step of calculating, from the three-dimensional coordinates, the world coordinates of the subject in the world coordinate system includes: setting the height difference between the camera and the subject to a preset value, the three-dimensional coordinates including a horizontal offset angle and a vertical offset angle, and calculating the world coordinates from the horizontal offset angle and the vertical offset angle, the world coordinates satisfying the following formulas:
Figure PCTCN2019130604-appb-000001
Figure PCTCN2019130604-appb-000002
z_w = -1,
where the height difference is 1, α is the horizontal offset angle, β is the vertical offset angle, and (x_w, y_w, z_w) are the world coordinates.
Further, the first conversion relationship satisfies the following formula:
Figure PCTCN2019130604-appb-000003
where (x_c, y_c, z_c) are the camera coordinates, (x_w, y_w, z_w) are the world coordinates,
Figure PCTCN2019130604-appb-000004
θ = 270° - β, ω = 90° - α.
Further, the second conversion relationship satisfies the following formulas:
Figure PCTCN2019130604-appb-000005
Figure PCTCN2019130604-appb-000006
where F_p and F_t are the lens focal lengths of the camera, (x_i, y_i) are the image coordinates, and (x_c, y_c, z_c) are the camera coordinates.
Further, the third conversion relationship satisfies the following formulas:
Figure PCTCN2019130604-appb-000007
Figure PCTCN2019130604-appb-000008
where (x_i, y_i) are the image coordinates, (dx, dy) are the pixel coordinates, and WIDTH and HEIGHT are the width and height of the camera's imaging screen, respectively.
Further, the lens focal lengths are calculated as:
Figure PCTCN2019130604-appb-000009
Figure PCTCN2019130604-appb-000010
where v is the horizontal field of view of the camera, u is the vertical field of view of the camera, and WIDTH and HEIGHT are the width and height of the camera's imaging screen, respectively.
Further, the step of obtaining the three-dimensional coordinates of the subject includes:
calculating the three-dimensional coordinates from the longitude and latitude of the camera, the longitude and latitude of the subject, and the height difference between the camera and the subject; the three-dimensional coordinates include a horizontal offset angle and a vertical offset angle, which satisfy the following formulas:
Figure PCTCN2019130604-appb-000011
β = arctan(dist/H),
ρ = arccos(cos(90-B_w)×cos(90-A_w)+sin(90-B_w)×sin(90-A_w)×cos(B_j-A_j)),
where α is the horizontal offset angle, β is the vertical offset angle, (A_j, A_w) are the longitude and latitude of the camera, (B_j, B_w) are the longitude and latitude of the subject, H is the height difference, and dist is the distance between the subject and the camera.
Further, the distance between the subject and the camera satisfies the following formula:
Figure PCTCN2019130604-appb-000012
where dist is the distance between the subject and the camera and r is the radius of the earth's equator.
Further, after the step of obtaining the third conversion relationship between the image coordinate system and the pixel coordinate system and calculating the pixel coordinates of the subject from the image coordinates and the third conversion relationship, the method further includes: displaying a preset label at the pixel coordinates (d_x, d_y) in the pixel coordinate system.
Further, the method further includes: establishing a world coordinate system and a camera coordinate system with the camera as the origin, and establishing an image coordinate system and a pixel coordinate system based on the imaging screen of the camera.
Further, the step of obtaining the three-dimensional coordinates of the subject includes: obtaining first pixel coordinates of the subject; obtaining a fourth conversion relationship between a first pixel coordinate system and a first image coordinate system, and obtaining first image coordinates of the subject from the first pixel coordinates and the fourth conversion relationship; obtaining a fifth conversion relationship between the first image coordinate system and a first camera coordinate system, and obtaining first camera coordinates of the subject from the first image coordinates and the fifth conversion relationship; obtaining a sixth conversion relationship between the first camera coordinate system and a first world coordinate system, and obtaining first world coordinates of the subject from the first camera coordinates and the sixth conversion relationship; and obtaining the three-dimensional coordinates of the subject from the first world coordinates.
Further, the fourth conversion relationship satisfies the following formulas:
x_i1 = d_x1 - WIDTH/2,
y_i1 = d_y1 - HEIGHT/2,
where (d_x1, d_y1) are the first pixel coordinates, (x_i1, y_i1) are the first image coordinates, and WIDTH and HEIGHT are the width and height of the camera's imaging screen, respectively.
Further, the fifth conversion relationship satisfies the following formula:
Figure PCTCN2019130604-appb-000013
where (x_c1, y_c1, z_c1) are the first camera coordinates, and F_p1 and F_t1 are the lens focal lengths of the camera.
Further, the sixth conversion relationship satisfies the following formula:
Figure PCTCN2019130604-appb-000014
where (x_w1, y_w1, z_w1) are the first world coordinates,
φ_1 = 90° + β, ψ_1 = 270° + α, and (α, β) are the three-dimensional coordinates of the subject.
Further, the step of obtaining the three-dimensional coordinates of the subject from the first world coordinates includes:
the three-dimensional coordinates of the subject satisfy the formulas α = arctan(y_w1/x_w1) and
Figure PCTCN2019130604-appb-000015
To solve the above technical problem, another technical solution adopted in this application is to provide a video surveillance system. The video surveillance system includes a camera, a display device, and a processor; the camera is used to shoot the subject, the display device is used to display the image of the subject captured by the camera, and the processor is used to implement the method of any of the above embodiments.
The beneficial effects of the embodiments of this application are as follows. The positioning method for video surveillance in this application is applied to a video surveillance system. The method obtains the three-dimensional coordinates of the subject, calculates the world coordinates of the subject from the three-dimensional coordinates, calculates its camera coordinates from the world coordinates, calculates its image coordinates from the camera coordinates, and finally calculates its pixel coordinates from the image coordinates. Once the pixel coordinates of the subject are calculated, the position of the subject on the display screen is obtained. In this way, not only can the real-time position of a fixed target on the display screen be calculated, but also the real-time position of a moving target, with high positioning accuracy and a wide range of applications.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of an embodiment of the positioning method for video surveillance of this application;
FIG. 2 is a coordinate diagram of the world coordinate system in S11 of FIG. 1;
FIG. 3 is a coordinate diagram of the camera coordinate system in S13 of FIG. 1;
FIG. 4 is a rotation diagram of the world coordinate system in S13 of FIG. 1;
FIG. 5 is a coordinate diagram of the image coordinate system in S14 of FIG. 1;
FIG. 6 is a coordinate diagram of the pixel coordinate system in S15 of FIG. 1;
FIG. 7 is a schematic flowchart of another embodiment of S11 in FIG. 1;
FIG. 8 is a schematic structural diagram of an embodiment of the video surveillance system of this application.
[Detailed Description]
The technical solutions in the embodiments of this application are described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this application without creative work fall within the protection scope of this application.
To track the position of a subject on the display screen in real time, this application provides a positioning method for video surveillance with a relatively simple calculation process and high positioning accuracy. The method can be applied to a video surveillance system that includes a camera for shooting the subject. When a label needs to be added to the video frame, this calculation method can also make the added label move with the subject.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the positioning method for video surveillance of this application. The method specifically includes:
S11: Obtain the three-dimensional coordinates of the subject.
The subject can be a tracking object that needs a label. The subject can be a moving target, such as a car on the road or a walking person, or a fixed target on the display screen, such as a building or a road.
After the subject is selected, the world coordinate system is first established. As shown in FIG. 2, in this embodiment the world coordinate system takes the position of the camera as the origin O_w; the z_w axis points vertically upward, the x_w axis points horizontally due north, and the y_w axis points horizontally due west. In other embodiments, other directions readily understood by those skilled in the art can also be chosen.
In a specific embodiment, the three-dimensional coordinates of the subject include a horizontal offset angle α and a vertical offset angle β. The horizontal offset angle α is the angle between the line connecting the subject and the camera and the due-north direction, and the vertical offset angle β is the angle between the line connecting the subject and the camera and the ground.
In this embodiment, the three-dimensional coordinates of the subject can be obtained from the subject's GPS information. Specifically, the three-dimensional coordinates (α, β) can be calculated from the longitude and latitude of the camera, the longitude and latitude of the subject, and the height difference between the camera and the subject.
The three-dimensional coordinates of the subject satisfy the following formulas:
Figure PCTCN2019130604-appb-000016
β = arctan(dist/H),
ρ = arccos(cos(90-B_w)×cos(90-A_w)+sin(90-B_w)×sin(90-A_w)×cos(B_j-A_j)),
where (A_j, A_w) are the longitude and latitude of the camera, (B_j, B_w) are the longitude and latitude of the subject, H is the height difference between the camera and the subject, and dist is the distance between the subject and the camera. In a specific embodiment, the distance dist between the subject and the camera satisfies the following formula:
Figure PCTCN2019130604-appb-000017
The longitude and latitude information of the camera and of the subject in this embodiment can be obtained from a corresponding GPS system, for example from GPS data sources such as mobile terminals, walkie-talkies, and vehicles.
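Step S11 above can be sketched in Python. The ρ and β formulas are as given in the text; the exact dist and α formulas appear here only as formula-image placeholders, so the arc-length reading dist = ρ·r and the standard initial-bearing formula for α are assumptions, as is the WGS-84 value used for r:

```python
import math

EQUATOR_RADIUS_M = 6_378_137.0  # r: earth's equatorial radius (WGS-84 value, an assumption)

def offset_angles(cam_lon, cam_lat, tgt_lon, tgt_lat, height_diff_m):
    """Sketch of S11: derive (alpha, beta) in degrees from GPS positions.

    (cam_lon, cam_lat) = (A_j, A_w) and (tgt_lon, tgt_lat) = (B_j, B_w), in degrees;
    height_diff_m = H, the camera-to-subject height difference in metres.
    """
    aj, aw = math.radians(cam_lon), math.radians(cam_lat)
    bj, bw = math.radians(tgt_lon), math.radians(tgt_lat)

    # Central angle rho between the two points, per the patent's formula:
    # rho = arccos(cos(90-Bw)cos(90-Aw) + sin(90-Bw)sin(90-Aw)cos(Bj-Aj)).
    rho = math.acos(
        math.cos(math.pi / 2 - bw) * math.cos(math.pi / 2 - aw)
        + math.sin(math.pi / 2 - bw) * math.sin(math.pi / 2 - aw) * math.cos(bj - aj)
    )

    # The dist formula is an image placeholder; the great-circle arc length
    # dist = rho * r is one plausible reading.
    dist = rho * EQUATOR_RADIUS_M

    # beta = arctan(dist / H), as stated in the text.
    beta = math.degrees(math.atan2(dist, height_diff_m))

    # The alpha formula is also an image placeholder; the standard initial-bearing
    # formula (clockwise angle from true north) is used here as an assumption.
    alpha = math.degrees(math.atan2(
        math.sin(bj - aj) * math.cos(bw),
        math.cos(aw) * math.sin(bw) - math.sin(aw) * math.cos(bw) * math.cos(bj - aj),
    )) % 360.0
    return alpha, beta, dist
```

For a target about 111 m due east of a camera mounted 10 m higher, this yields α ≈ 90° and a β close to 90°, consistent with a nearly horizontal line of sight.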
S12: Calculate, from the three-dimensional coordinates, the world coordinates of the subject in the world coordinate system.
After the three-dimensional coordinates of the subject are obtained, its world coordinates can be calculated from them. When calculating the world coordinates, only the ratios between the coordinates are needed, not their precise values, so the height difference between the subject and the camera can be set to a preset value, for example directly set to 1, and the world coordinates are then calculated from the horizontal offset angle and the vertical offset angle. In a specific embodiment, the world coordinates satisfy the following formulas:
Figure PCTCN2019130604-appb-000018
Figure PCTCN2019130604-appb-000019
z_w = -1,
where the height difference between the subject and the camera is 1 and (x_w, y_w, z_w) are the world coordinates of the subject in the world coordinate system.
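The x_w and y_w formulas survive here only as formula images, so the sketch below reconstructs plausible forms from the stated conventions: β = arctan(dist/H) makes the horizontal radius tan(β) when H = 1, z_w = -1 places the subject one height unit below the camera, and α is assumed to be a clockwise bearing from true north (the sign of y_w depends on that assumption, since y_w points west):

```python
import math

def world_coords(alpha_deg, beta_deg):
    """Sketch of S12 with the height difference H preset to 1.

    x_w points due north, y_w due west, z_w up; the exact trig forms are
    assumptions reconstructed from the surrounding definitions.
    """
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    horizontal = math.tan(b)          # dist / H with H = 1, from beta = arctan(dist/H)
    x_w = horizontal * math.cos(a)    # component toward true north
    y_w = -horizontal * math.sin(a)   # eastward bearing is negative on the west axis (assumed sign)
    z_w = -1.0                        # subject one height unit below the camera, as stated
    return x_w, y_w, z_w
```

With α = 0° (due north) and β = 45°, this gives (1, 0, -1); with α = 90° it gives a point one unit east of (i.e. minus one on the west axis from) the camera.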
S13: Obtain the first conversion relationship between the world coordinate system and the camera coordinate system, and calculate the camera coordinates of the subject from the world coordinates and the first conversion relationship.
After the world coordinates of the subject are obtained, its camera coordinates can be calculated from the first conversion relationship between the world coordinate system and the camera coordinate system. Specifically, in this embodiment a camera coordinate system is established. As shown in FIG. 3, the camera coordinate system takes the camera as the origin O_c; the main optical axis of the camera is the z_c axis; the y_c axis is the intersection line of the plane z_wO_cz_c (the plane through z_w and z_c) and the plane through the origin perpendicular to z_c, with the direction of y_c forming an acute angle with z_w; the direction of the x_c axis is then determined by the left-hand rule. In other embodiments, other directions readily understood by those skilled in the art can also be chosen.
The transformation from the world coordinate system to the camera coordinate system is a rigid-body transformation: the subject is not deformed, and only rotation and translation are required. Since the world coordinate system and the camera coordinate system share the same origin, only a rotation is needed. From the geometric relationship in FIG. 3, x_c lies in the x_wO_wy_w plane. Therefore, the world coordinate system can be made to coincide with the camera coordinate system by two rotations: first, rotate the world coordinate system counterclockwise around the z_w axis by ω so that x_w coincides with x_c, and denote the resulting coordinate system (x_w0, y_w0, z_w0); second, rotate the coordinate system (x_w0, y_w0, z_w0) counterclockwise around x_w by θ so that z_w coincides with z_c, at which point y_w also automatically coincides with y_c and the rotation is complete, as shown in FIG. 4.
In a specific embodiment, the first conversion relationship between the world coordinate system and the camera coordinate system satisfies the following formulas:
Figure PCTCN2019130604-appb-000020
Figure PCTCN2019130604-appb-000021
θ = 270° - β, ω = 90° - α,
where (x_c, y_c, z_c) are the camera coordinates of the subject in the camera coordinate system.
Substituting the obtained world coordinates into the above formulas yields the camera coordinates of the subject in the camera coordinate system.
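The two-rotation step S13 can be sketched as follows. The patent's actual rotation matrices are formula images here, so the standard passive (frame) rotation matrices are assumptions; only the rotation order and the angles θ = 270° - β, ω = 90° - α come from the text:

```python
import math

def rot_z(w):
    """Passive rotation: coordinates of a fixed point after the frame is rotated
    counterclockwise by w about the z axis (matrix convention is an assumption)."""
    c, s = math.cos(w), math.sin(w)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    """Passive rotation about the x axis, same convention as rot_z."""
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def world_to_camera(p_world, alpha_deg, beta_deg):
    """Sketch of S13: rotate about z_w by omega = 90 - alpha, then about the
    rotated x axis by theta = 270 - beta, as the text describes."""
    omega = math.radians(90.0 - alpha_deg)
    theta = math.radians(270.0 - beta_deg)
    return mat_vec(rot_x(theta), mat_vec(rot_z(omega), p_world))
```

Because the transform is a pure rotation, it must preserve the length of every vector, which gives a quick sanity check independent of the assumed matrix convention.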
S14: Obtain the second conversion relationship between the camera coordinate system and the image coordinate system, and calculate the image coordinates of the subject from the camera coordinates and the second conversion relationship.
After the camera coordinates of the subject are obtained, its image coordinates can be calculated from the second conversion relationship between the camera coordinate system and the image coordinate system. In a specific embodiment, an image coordinate system is established based on the imaging screen of the camera, with the center of the imaging screen as the origin o_i, the x_i axis pointing horizontally to the right, and the y_i axis pointing vertically downward, as shown in FIG. 5. In other embodiments, other directions readily understood by those skilled in the art can also be chosen.
The proportional relationship between the camera coordinates and the image coordinates can be derived from the camera imaging principle. The second conversion relationship between the camera coordinate system and the image coordinate system satisfies the following formulas:
Figure PCTCN2019130604-appb-000022
Figure PCTCN2019130604-appb-000023
where F_p and F_t are the lens focal lengths of the camera and (x_i, y_i) are the image coordinates.
The lens focal lengths F_p and F_t satisfy the following formulas:
Figure PCTCN2019130604-appb-000024
Figure PCTCN2019130604-appb-000025
where v is the horizontal field of view of the camera and u is the vertical field of view of the camera; both can be obtained by looking them up in the camera's specifications.
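Step S14 can be sketched as below. The focal-length and projection formulas survive only as formula images, so the standard pinhole relations f = (size/2)/tan(fov/2) (in pixel units) and x_i = F_p·x_c/z_c are assumed forms consistent with the stated proportional relationship:

```python
import math

def lens_focal_lengths(width_px, height_px, v_deg, u_deg):
    """Assumed pinhole relation: focal length in pixels from field of view.

    v_deg/u_deg are the horizontal/vertical fields of view; width_px/height_px
    are WIDTH and HEIGHT of the imaging screen.
    """
    f_p = (width_px / 2.0) / math.tan(math.radians(v_deg) / 2.0)
    f_t = (height_px / 2.0) / math.tan(math.radians(u_deg) / 2.0)
    return f_p, f_t

def camera_to_image(x_c, y_c, z_c, f_p, f_t):
    """Sketch of S14: perspective projection onto the image plane
    (x_i = F_p * x_c / z_c, y_i = F_t * y_c / z_c, an assumed form)."""
    return f_p * x_c / z_c, f_t * y_c / z_c
```

For a 1920-pixel-wide screen with a 90° horizontal field of view, this gives F_p = 960 pixels, and a point at camera coordinates (1, 0, 2) projects to x_i = 480.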
S15: Obtain the third conversion relationship between the image coordinate system and the pixel coordinate system, and calculate the pixel coordinates of the subject from the image coordinates and the third conversion relationship.
After the image coordinates of the subject are obtained, its pixel coordinates are calculated from the third conversion relationship between the image coordinate system and the pixel coordinate system. Specifically, a pixel coordinate system is established based on the imaging screen of the camera. In this embodiment, as shown in FIG. 6, the pixel coordinate system takes the upper-left corner of the imaging screen as the origin O, with the dx axis pointing horizontally to the right and the dy axis pointing vertically downward. In other embodiments, other directions readily understood by those skilled in the art can also be chosen. The third conversion relationship between the image coordinate system and the pixel coordinate system satisfies the following formulas:
Figure PCTCN2019130604-appb-000026
Figure PCTCN2019130604-appb-000027
where (dx, dy) are the pixel coordinates of the subject in the pixel coordinate system, and WIDTH and HEIGHT are the width and height of the camera's imaging screen, respectively.
Substituting the obtained image coordinates into the formulas of the third conversion relationship above yields the pixel coordinates of the subject.
Unlike the prior art, the above embodiment can derive the pixel coordinates of the subject from GPS information, achieving relatively accurate real-time positioning of the subject on the display screen. That is, this application can not only calculate the real-time position of a fixed target on the display screen, but can also track a moving target in real time simply and accurately, which is highly practical. When a label needs to be placed on the subject, a preset label can be displayed at the corresponding pixel coordinates, so that the added preset label moves with the subject.
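The image-to-pixel step S15 can be sketched directly. The third-conversion formulas are formula images here, but the explicit inverse given later for the fourth conversion (x_i1 = d_x1 - WIDTH/2, y_i1 = d_y1 - HEIGHT/2) implies a simple origin shift from the screen centre to the top-left corner:

```python
def image_to_pixel(x_i, y_i, width, height):
    """Sketch of S15: shift the origin from the imaging-screen centre to its
    upper-left corner (form inferred from the stated inverse relation)."""
    return x_i + width / 2.0, y_i + height / 2.0

def pixel_to_image(d_x, d_y, width, height):
    """Fourth conversion relationship, exactly as stated in the text."""
    return d_x - width / 2.0, d_y - height / 2.0
```

The two functions are exact inverses, so a round trip through them returns the original image coordinates, and the image-frame origin maps to the screen centre.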
本申请还提供了一种视频监控的定位方法,区别于上一实施例,如图7所示,本实施例中获取被摄目标的三维坐标的步骤包括:
S111:获取被摄目标的第一像素坐标。
建立第一像素坐标系,其中第一像素坐标系可以和上述实施例中的像素坐标系相同。第一像素坐标系也基于摄像机的成像画面建立,以成像画面左上角为原点,d x1轴水平向右,d y1轴垂直向下。
当在显示屏幕上选取某被摄目标或者手动添加标签时,此时已知被摄目标的第一像素坐标,根据第一像素坐标也可以反向推导出被摄目标相对摄像机的水平偏移角α和垂直偏移角β,(α,β)即为被摄目标的三维坐标。
S112:获取第一像素坐标系和第一图像坐标系之间的第四转换关系,根据第一像素坐标和第四转换关系,得到被摄目标第一图像坐标。
A first image coordinate system is established, which may be the same as the image coordinate system in the previous embodiment. The first image coordinate system is based on the imaging frame of the camera; for details, refer to the establishment of the image coordinate system in FIG. 5, which is not repeated here.
The fourth conversion relationship between the first pixel coordinate system and the first image coordinate system is obtained, and the first image coordinates of the subject are then obtained from the first pixel coordinates and the obtained fourth conversion relationship.
The fourth conversion relationship between the first pixel coordinate system and the first image coordinate system satisfies the following formulas:
x_i1 = d_x1 − WIDTH/2,
y_i1 = d_y1 − HEIGHT/2,
where (d_x1, d_y1) are the first pixel coordinates, (x_i1, y_i1) are the first image coordinates, and WIDTH and HEIGHT are the width and height of the imaging frame of the camera, respectively.
S113: Obtain a fifth conversion relationship between the first image coordinate system and a first camera coordinate system, and obtain first camera coordinates of the subject from the first image coordinates and the fifth conversion relationship.
A first camera coordinate system is established, which may be the same as the camera coordinate system in the previous embodiment; for details, refer to the establishment of the camera coordinate system in FIG. 3, which is not repeated here. Since the transformation from the first camera coordinate system to the first image coordinate system is a perspective projection, which is a many-to-one mapping, the camera coordinate z_c1 cannot be determined when converting the first image coordinates (x_i1, y_i1) into the first camera coordinates (x_c1, y_c1, z_c1). However, what this embodiment needs to calculate are the horizontal offset angle α and vertical offset angle β of the subject, i.e., only the proportional relationships among the coordinates. Therefore, when calculating the first camera coordinates x_c1 and y_c1, z_c1 can be set to a first preset value, for example directly set to 1.
After the fifth conversion relationship between the first image coordinate system and the first camera coordinate system is obtained, the first camera coordinates of the subject are calculated from the fifth conversion relationship and the first image coordinates. The fifth conversion relationship satisfies the following formulas:
x_c1 = x_i1·z_c1 / F_p1,
y_c1 = y_i1·z_c1 / F_t1,
where (x_c1, y_c1, z_c1) are the first camera coordinates of the subject in the first camera coordinate system, and F_p1 and F_t1 are the corresponding lens focal lengths of the camera.
S114: Obtain a sixth conversion relationship between the first camera coordinate system and a first world coordinate system, and obtain first world coordinates of the subject from the first camera coordinates and the sixth conversion relationship.
A first world coordinate system is established; for its establishment, refer to that of the world coordinate system in FIG. 2. Converting the first camera coordinates (x_c1, y_c1, z_c1) into the first world coordinates (x_w1, y_w1, z_w1) can likewise be achieved by rotating the coordinate system: first, rotate the first camera coordinate system counterclockwise about x_c1 by ψ_1 so that z_c1 coincides with z_w1; second, rotate the resulting coordinate system counterclockwise about z_c1 by φ_1 so that x_c1 coincides with x_w1, at which point y_c1 automatically coincides with y_w1 and the rotation is complete.
The sixth conversion relationship between the first camera coordinate system and the first world coordinate system is obtained, and the first world coordinates of the subject are obtained from the first camera coordinates and the obtained sixth conversion relationship. The sixth conversion relationship satisfies the following formulas:
x_w1 = x_c1·cosφ_1 + (y_c1·cosψ_1 + z_c1·sinψ_1)·sinφ_1,
y_w1 = −x_c1·sinφ_1 + (y_c1·cosψ_1 + z_c1·sinψ_1)·cosφ_1,
z_w1 = −y_c1·sinψ_1 + z_c1·cosψ_1,
where (x_w1, y_w1, z_w1) are the first world coordinates of the subject in the first world coordinate system, φ_1 = 90° + β, ψ_1 = 270° + α, and (α, β) are the three-dimensional coordinates of the subject.
S115: Obtain the three-dimensional coordinates of the subject from the first world coordinates.
After the first world coordinates (x_w1, y_w1, z_w1) of the subject are obtained, the three-dimensional coordinates of the subject are obtained from them; the three-dimensional coordinates satisfy the following formulas:
α = arctan(y_w1 / x_w1),
β = arctan(√(x_w1² + y_w1²) / (−z_w1)).
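The whole inverse chain S111 to S115, together with the forward chain it undoes, can be sketched as follows. This sketch is ours, not the patent's: function names are illustrative, the height difference is fixed to 1 as in the first embodiment, and the sixth conversion relationship is implemented as the exact inverse (transpose) of the forward rotation, which is consistent with the stated angles since φ_1 = 90° + β and ψ_1 = 270° + α equal −θ and −ω modulo 360°. z_c1 is fixed to 1 because perspective projection discards depth.

```python
# Round-trip sketch: (alpha, beta) -> pixel (S12-S15), then
# pixel -> (alpha, beta) (S111-S115).
import math

def rotation(alpha_deg, beta_deg):
    """Forward world->camera rotation matrix R = Rx(theta) * Rz(omega)."""
    o = math.radians(90.0 - alpha_deg)    # omega, about z_w
    t = math.radians(270.0 - beta_deg)    # theta, about the new x axis
    rz = [[math.cos(o), math.sin(o), 0.0],
          [-math.sin(o), math.cos(o), 0.0],
          [0.0, 0.0, 1.0]]
    rx = [[1.0, 0.0, 0.0],
          [0.0, math.cos(t), math.sin(t)],
          [0.0, -math.sin(t), math.cos(t)]]
    return [[sum(rx[i][k] * rz[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def angles_to_pixel(alpha_deg, beta_deg, fp, ft, w, h):
    """Forward chain S12-S15, with the camera/subject height difference 1."""
    d = math.tan(math.radians(beta_deg))              # horizontal distance
    a = math.radians(alpha_deg)
    xw, yw, zw = d * math.cos(a), d * math.sin(a), -1.0
    r = rotation(alpha_deg, beta_deg)
    xc, yc, zc = (r[i][0] * xw + r[i][1] * yw + r[i][2] * zw for i in range(3))
    return fp * xc / zc + w / 2.0, ft * yc / zc + h / 2.0

def pixel_to_angles(dx, dy, r, fp, ft, w, h):
    """Inverse chain S111-S115; r is the forward rotation to undo."""
    xi, yi = dx - w / 2.0, dy - h / 2.0               # fourth relationship
    xc, yc, zc = xi / fp, yi / ft, 1.0                # fifth relationship, z_c1 = 1
    # Sixth relationship: undo the rotation; R is orthogonal, so apply R^T.
    xw = r[0][0] * xc + r[1][0] * yc + r[2][0] * zc
    yw = r[0][1] * xc + r[1][1] * yc + r[2][1] * zc
    zw = r[0][2] * xc + r[1][2] * yc + r[2][2] * zc
    alpha = math.degrees(math.atan2(yw, xw))          # S115
    beta = math.degrees(math.atan2(math.hypot(xw, yw), -zw))
    return alpha, beta
```

The round trip is exact whenever the subject lies in front of the camera (z_c > 0), because the projection and back-projection differ only by the unrecoverable depth scale, which cancels when the angles are taken.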
In this embodiment, the three-dimensional coordinates corresponding to a point can be derived from the pixel coordinates of a subject on the display screen, and when the camera rotates, the new pixel coordinates of that point in the imaging frame can be calculated from these three-dimensional coordinates. In this way, when a label is added manually on the screen, the added label can follow the subject as it moves, with high positioning accuracy. The positioning method provided by the present application therefore has a simple calculation process, high positioning accuracy, and a wide range of applications.
Please refer to FIG. 8, which is a schematic structural diagram of an embodiment of the video surveillance system provided by the present application. The video surveillance system includes a camera 81, a display device 82, and a processor 83.
The camera 81 is configured to shoot the subject, and includes video capture devices such as high-mounted dome cameras and pinhole cameras.
The display device 82 is connected to the camera 81 and is configured to display the image of the subject shot by the camera 81.
The processor 83 is configured to implement the positioning method for video surveillance of any of the above embodiments. For details of the method, refer to the drawings and descriptions of the above embodiments, which are not repeated here.
The above are only embodiments of the present application and do not thereby limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (16)

  1. A positioning method for video surveillance, applied to a video surveillance system, the video surveillance system comprising a camera configured to shoot a subject, wherein the method comprises:
    obtaining three-dimensional coordinates of the subject;
    calculating world coordinates of the subject in a world coordinate system from the three-dimensional coordinates;
    obtaining a first conversion relationship between the world coordinate system and a camera coordinate system, and calculating camera coordinates of the subject from the world coordinates and the first conversion relationship;
    obtaining a second conversion relationship between the camera coordinate system and an image coordinate system, and calculating image coordinates of the subject from the camera coordinates and the second conversion relationship;
    obtaining a third conversion relationship between the image coordinate system and a pixel coordinate system, and calculating pixel coordinates of the subject from the image coordinates and the third conversion relationship.
  2. The method according to claim 1, wherein the step of calculating the world coordinates of the subject in the world coordinate system from the three-dimensional coordinates comprises:
    setting a height difference between the camera and the subject to a preset value, the three-dimensional coordinates comprising a horizontal offset angle and a vertical offset angle, and calculating the world coordinates from the horizontal offset angle and the vertical offset angle, the world coordinates satisfying the following formulas:
    x_w = tanβ·cosα,
    y_w = tanβ·sinα,
    z_w = −1,
    wherein the height difference is 1, α is the horizontal offset angle, β is the vertical offset angle, and (x_w, y_w, z_w) are the world coordinates.
  3. The method according to claim 1, wherein the first conversion relationship satisfies the following formulas:
    x_c = x_w·cosω + y_w·sinω,
    y_c = (−x_w·sinω + y_w·cosω)·cosθ + z_w·sinθ,
    z_c = −(−x_w·sinω + y_w·cosω)·sinθ + z_w·cosθ,
    wherein (x_c, y_c, z_c) are the camera coordinates, (x_w, y_w, z_w) are the world coordinates, and
    θ = 270° − β, ω = 90° − α.
  4. The method according to claim 1, wherein the second conversion relationship satisfies the following formulas:
    x_i = F_p·x_c / z_c,
    y_i = F_t·y_c / z_c,
    wherein F_p and F_t are lens focal lengths of the camera, (x_i, y_i) are the image coordinates, and (x_c, y_c, z_c) are the camera coordinates.
  5. The method according to claim 1, wherein the third conversion relationship satisfies the following formulas:
    dx = x_i + WIDTH/2,
    dy = y_i + HEIGHT/2,
    wherein (x_i, y_i) are the image coordinates, (dx, dy) are the pixel coordinates, and WIDTH and HEIGHT are the width and height of the imaging frame of the camera, respectively.
  6. The method according to claim 4, wherein the lens focal lengths are calculated as:
    F_p = (WIDTH/2) / tan(v/2),
    F_t = (HEIGHT/2) / tan(u/2),
    wherein v is the horizontal field-of-view angle of the camera, u is the vertical field-of-view angle of the camera, and WIDTH and HEIGHT are the width and height of the imaging frame of the camera, respectively.
  7. The method according to claim 1, wherein the step of obtaining the three-dimensional coordinates of the subject comprises:
    calculating the three-dimensional coordinates from the longitude and latitude of the camera, the longitude and latitude of the subject, and a height difference between the camera and the subject;
    the three-dimensional coordinates comprising a horizontal offset angle and a vertical offset angle, the horizontal offset angle and the vertical offset angle satisfying the following formulas:
    α = arcsin(sin(90° − B_w)·sin(B_j − A_j) / sinρ),
    β = arctan(dist/H),
    ρ = arccos(cos(90° − B_w)·cos(90° − A_w) + sin(90° − B_w)·sin(90° − A_w)·cos(B_j − A_j)),
    wherein α is the horizontal offset angle, β is the vertical offset angle, (A_j, A_w) are the longitude and latitude of the camera, (B_j, B_w) are the longitude and latitude of the subject, H is the height difference, and dist is the distance between the subject and the camera.
  8. The method according to claim 7, wherein the distance between the subject and the camera satisfies the following formula:
    dist = 2πr·ρ / 360,
    wherein dist is the distance between the subject and the camera and r is the equatorial radius of the Earth.
  9. The method according to claim 1, wherein after the step of obtaining the third conversion relationship between the image coordinate system and the pixel coordinate system and calculating the pixel coordinates of the subject from the image coordinates and the third conversion relationship, the method further comprises:
    displaying a preset label at the pixel coordinates (dx, dy) in the pixel coordinate system.
  10. The method according to claim 1, wherein the method further comprises: establishing the world coordinate system and the camera coordinate system with the camera as the origin, and establishing the image coordinate system and the pixel coordinate system based on the imaging frame of the camera.
  11. The method according to claim 1, wherein the step of obtaining the three-dimensional coordinates of the subject comprises:
    obtaining first pixel coordinates of the subject;
    obtaining a fourth conversion relationship between a first pixel coordinate system and a first image coordinate system, and obtaining first image coordinates of the subject from the first pixel coordinates and the fourth conversion relationship;
    obtaining a fifth conversion relationship between the first image coordinate system and a first camera coordinate system, and obtaining first camera coordinates of the subject from the first image coordinates and the fifth conversion relationship;
    obtaining a sixth conversion relationship between the first camera coordinate system and a first world coordinate system, and obtaining first world coordinates of the subject from the first camera coordinates and the sixth conversion relationship;
    obtaining the three-dimensional coordinates of the subject from the first world coordinates.
  12. The method according to claim 11, wherein the fourth conversion relationship satisfies the following formulas:
    x_i1 = d_x1 − WIDTH/2,
    y_i1 = d_y1 − HEIGHT/2,
    wherein (d_x1, d_y1) are the first pixel coordinates, (x_i1, y_i1) are the first image coordinates, and WIDTH and HEIGHT are the width and height of the imaging frame of the camera, respectively.
  13. The method according to claim 12, wherein the fifth conversion relationship satisfies the following formulas:
    x_c1 = x_i1·z_c1 / F_p1,
    y_c1 = y_i1·z_c1 / F_t1,
    wherein (x_c1, y_c1, z_c1) are the first camera coordinates, and F_p1 and F_t1 are lens focal lengths of the camera.
  14. The method according to claim 13, wherein the sixth conversion relationship satisfies the following formulas:
    x_w1 = x_c1·cosφ_1 + (y_c1·cosψ_1 + z_c1·sinψ_1)·sinφ_1,
    y_w1 = −x_c1·sinφ_1 + (y_c1·cosψ_1 + z_c1·sinψ_1)·cosφ_1,
    z_w1 = −y_c1·sinψ_1 + z_c1·cosψ_1,
    wherein (x_w1, y_w1, z_w1) are the first world coordinates,
    φ_1 = 90° + β, ψ_1 = 270° + α, and (α, β) are the three-dimensional coordinates of the subject.
  15. The method according to claim 14, wherein the step of obtaining the three-dimensional coordinates of the subject from the first world coordinates comprises:
    the three-dimensional coordinates of the subject satisfying the following formulas: α = arctan(y_w1/x_w1) and
    β = arctan(√(x_w1² + y_w1²) / (−z_w1)).
  16. A video surveillance system, wherein the video surveillance system comprises a camera, a display device, and a processor, the camera being configured to shoot a subject, the display device being configured to display an image of the subject shot by the camera, and the processor being configured to implement the method according to any one of claims 1-15.
PCT/CN2019/130604 2019-12-31 2019-12-31 Positioning method for video surveillance and video surveillance system WO2021134507A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130604 WO2021134507A1 (zh) 2019-12-31 2019-12-31 Positioning method for video surveillance and video surveillance system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/130604 WO2021134507A1 (zh) 2019-12-31 2019-12-31 Positioning method for video surveillance and video surveillance system

Publications (1)

Publication Number Publication Date
WO2021134507A1 true WO2021134507A1 (zh) 2021-07-08

Family

ID=76686319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/130604 WO2021134507A1 (zh) 2019-12-31 2019-12-31 Positioning method for video surveillance and video surveillance system

Country Status (1)

Country Link
WO (1) WO2021134507A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003346132A (ja) * 2002-05-28 2003-12-05 Toshiba Corp Coordinate conversion device and coordinate conversion program
CN103747207A (zh) * 2013-12-11 2014-04-23 深圳先进技术研究院 Positioning and tracking method based on a video surveillance network
CN105303549A (zh) * 2015-06-29 2016-02-03 北京格灵深瞳信息技术有限公司 Method and device for determining positional relationships between measured objects in a video image
CN106648360A (zh) * 2016-11-30 2017-05-10 深圳市泛海三江科技发展有限公司 Positioning method and device for a 3D dome camera
CN109242779A (zh) * 2018-07-25 2019-01-18 北京中科慧眼科技有限公司 Method and device for constructing a camera imaging model, and automobile autonomous driving system
CN109961485A (zh) * 2019-03-05 2019-07-02 南京理工大学 Method for target positioning based on monocular vision



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19958144

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19958144

Country of ref document: EP

Kind code of ref document: A1