CN116805277B - Video monitoring target node pixel coordinate conversion method and system - Google Patents

Video monitoring target node pixel coordinate conversion method and system

Info

Publication number
CN116805277B
CN116805277B (application CN202311042801.XA)
Authority
CN
China
Prior art keywords
camera
target node
azimuth angle
pixel
relative
Prior art date
Legal status
Active
Application number
CN202311042801.XA
Other languages
Chinese (zh)
Other versions
CN116805277A (en)
Inventor
姜孝兵
谢刚
张煜辉
魏延峰
刘泽泉
凌海锋
李曼玉
何洋洋
苏俊
Current Assignee
Geospace Information Technology Co ltd
Original Assignee
Geospace Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Geospace Information Technology Co ltd filed Critical Geospace Information Technology Co ltd
Priority to CN202311042801.XA priority Critical patent/CN116805277B/en
Publication of CN116805277A publication Critical patent/CN116805277A/en
Application granted granted Critical
Publication of CN116805277B publication Critical patent/CN116805277B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a method and a system for converting the pixel coordinates of a video monitoring target node. The method comprises the following steps: acquiring the position information of a camera relative to the earth, and acquiring the geographic coordinates of the target nodes in the reference image spots within the monitoring range; calculating the vertical azimuth angle and the horizontal azimuth angle of each target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node; and calculating the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera according to the position information of the camera and the vertical and horizontal azimuth angles of the target node relative to the camera. Under video monitoring, the pixel coordinates of actual reference ground features within the monitoring range are calculated from the camera's geodetic plane coordinates, height difference, vertical azimuth angle, horizontal azimuth angle, vertical view angle and horizontal view angle, and are displayed in the video picture, thereby giving the camera an AR (augmented reality) function.

Description

Video monitoring target node pixel coordinate conversion method and system
Technical Field
The invention relates to the field of natural resource investigation and monitoring, in particular to a method and a system for converting pixel coordinates of a video monitoring target node.
Background
With the development of the natural resource investigation and monitoring industry, monitoring requirements for natural resources keep rising: what is expected of everyday surveillance cameras is no longer limited to the raw video picture, and the characteristics of monitored targets need to be presented in more diverse ways. For example, if a building exists in the monitored area and its occupied area needs to be examined further, drawing the building's occupied area onto the video picture achieves an AR (augmented reality) effect, which requires converting the geographic coordinates of the target ground feature into pixel coordinates in the image.
Disclosure of Invention
To address the technical problems in the prior art, the invention provides a method and a system for converting the pixel coordinates of a video monitoring target node.
According to a first aspect of the present invention, there is provided a video surveillance target node pixel coordinate conversion method, including:
acquiring position information of a camera relative to the earth, and acquiring geographic coordinates of a target node in a reference image spot in a monitoring range;
according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range, calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera;
and calculating the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera according to the position information of the camera and the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera.
According to a second aspect of the present invention, there is provided a video surveillance target node pixel coordinate conversion system, comprising:
the acquisition module is used for acquiring the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range;
the first calculation module is used for calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range;
and the second calculation module is used for calculating the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera according to the position information of the camera and the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera.
With the method and system for converting the pixel coordinates of a video monitoring target node provided by the invention, the pixel coordinates of actual reference ground features within the monitoring range are calculated, under video monitoring, from the camera's geodetic plane coordinates, height difference, vertical azimuth angle, horizontal azimuth angle, vertical view angle and horizontal view angle, and are displayed in the video picture, thereby giving the camera an AR augmented reality function.
Drawings
FIG. 1 is a flowchart of a method for converting pixel coordinates of a video monitoring target node;
FIG. 2 is a schematic view of a camera;
FIG. 3 is a schematic view of a vertical section of an image;
FIG. 4 is a schematic view of camera vertical orientation imaging;
FIG. 5 is a schematic view of a camera in a vertical orientation imaging section;
FIG. 6 is a schematic view of a vertical section of a camera;
fig. 7 is a schematic structural diagram of a pixel coordinate conversion system of a video monitoring target node according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, and not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the invention without inventive effort fall within the scope of protection of the invention. In addition, the technical features of the individual embodiments provided by the invention may be combined with one another at will to form workable technical solutions; such combinations are not limited by the order of steps or by the structural composition, but must be realizable by a person of ordinary skill in the art. Where a combination of technical solutions is contradictory or cannot be realized, the combination shall be deemed not to exist and falls outside the claimed scope of protection of the invention.
Based on the problems described in the background, the invention provides a video monitoring target node pixel coordinate conversion method that combines GIS technology with pixel coordinate conversion technology to display the historical evolution image spots of a monitored area directly on the video picture.
Fig. 1 is a flowchart of a method for converting pixel coordinates of a video monitoring target node according to the present invention, where, as shown in fig. 1, the method includes:
and step 1, acquiring the position information of the camera relative to the earth, and acquiring the geographic coordinates of the target node in the reference map spot in the monitoring range.
It is understood that the camera is installed and leveled: the camera is mounted on an iron tower or a video pole, and a leveling tube is used to center the bubble, ensuring that the camera is parallel to the ground plane.
After the camera is installed, the position information of the camera relative to the earth is acquired. The position information of the camera relative to the earth is (x_0, y_0, z_0), where (x_0, y_0) are the geodetic plane coordinates where the camera is located and z_0 is the elevation coordinate of the camera above the ground. The geodetic plane coordinates (x_0, y_0) of the camera and its height difference z_0 from the ground are determined using RTK surveying.
While the position information of the camera is acquired, the geographic coordinates of the reference image spots within the monitoring range are acquired: the geographic coordinates of the reference image spots in the monitored area are obtained from the reference GIS data by spatial query, and the geographic coordinates P = (x_p, y_p) of each target node in the reference image spots are obtained.
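As an illustration of step 1, the sketch below shows one possible way to hold the surveyed camera parameters and to perform the spatial query against the reference GIS layer. The CameraInfo fields, the GeoJSON-like feature structure and the use of the shapely library are assumptions made for the example and are not prescribed by the method itself.

```python
from dataclasses import dataclass
from typing import List, Tuple

from shapely.geometry import Polygon, shape


@dataclass
class CameraInfo:
    x0: float    # geodetic plane easting of the camera (from the RTK survey)
    y0: float    # geodetic plane northing of the camera (from the RTK survey)
    z0: float    # elevation of the camera above the ground
    pan: float   # horizontal azimuth angle of the camera's current central ray, in radians
    tilt: float  # vertical azimuth angle of the camera's current central ray, in radians
    hfov: float  # full horizontal view angle at the current zoom, in radians
    vfov: float  # full vertical view angle at the current zoom, in radians


def target_nodes_in_range(features: List[dict],
                          monitoring_area: Polygon) -> List[List[Tuple[float, float]]]:
    """Spatial query of the reference GIS layer: for every reference image spot
    (polygon feature) that intersects the camera's monitoring range, return the
    geographic coordinates (x_p, y_p) of its target nodes (polygon vertices)."""
    spots = []
    for feature in features:
        geom = shape(feature["geometry"])       # polygon of one reference image spot
        if geom.intersects(monitoring_area):    # keep spots inside the monitoring range
            spots.append([(x, y) for x, y, *_ in geom.exterior.coords])
    return spots
```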
And 2, calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range.
And 3, calculating the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera according to the position information of the camera and the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera.
It will be appreciated that the vertical azimuth angle t_v of the target node P relative to the camera can be calculated with reference to the camera imaging schematic diagrams shown in FIG. 2 and FIG. 3.
Calculating the vertical azimuth angle of the geographic coordinates of the target node relative to the camera:
Formulas (1)-(3) express this calculation, where (x_0, y_0, z_0) are the geodetic plane coordinates and elevation of the camera, (x_p, y_p) are the geographic coordinates of the target point, d = sqrt((x_p - x_0)^2 + (y_p - y_0)^2) is the horizontal distance of the target point from the camera, z_0 is the height of the camera above the ground, and α is the angle, within the vertical field of view, between the ray OM and the camera's distance ray.
The vertical azimuth angle t_v of the target node relative to the camera is then derived through formula (4).
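As a hedged illustration of formulas (1)-(4), the sketch below computes the horizontal distance in the geodetic plane and then the vertical angle of the target relative to the camera. Because the original formula images are not reproduced in this text, the angular convention used here (an elevation angle measured from the camera's horizontal plane, negative for ground-level targets below the camera) is an assumption of the example, not a statement of the patented formulas.

```python
import math


def vertical_azimuth(x0: float, y0: float, z0: float,
                     xp: float, yp: float, zp: float = 0.0) -> float:
    """Vertical azimuth angle t_v of target node P relative to the camera.

    The horizontal distance d of the target from the camera is computed in the
    geodetic plane, and the vertical angle is the angle subtended by the height
    difference between the target (at ground level when zp = 0) and the camera
    mounted at height z0 above the ground.
    """
    d = math.hypot(xp - x0, yp - y0)   # horizontal distance in the geodetic plane
    return math.atan2(zp - z0, d)      # vertical angle in radians (assumed convention)
```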
according to the vertical azimuth angle of the target node P relative to the cameraCalculation purposeThe specific steps of calculating the pixel ordinate of the target node comprise:
(1) Referring to FIG. 3, the minimum and maximum angles of the vertical field of view at the current pose of the camera are calculated through formulas (5) and (6): with φ_v denoting the vertical azimuth angle of the camera's current central ray and β_v the half angle of the camera's current vertical view angle, the rays OM and ON bounding the vertical field of view lie at the angles φ_v - β_v and φ_v + β_v.
(2) Referring to FIG. 4 and FIG. 5, which show a schematic view and a sectional view of the camera's vertical-direction imaging, the imaging angles of the camera are obtained through formulas (7)-(11) from the angles deduced in step (1). Applying the tangent function to these angles (formulas (12) and (13)) relates the angular offset of the target from the camera's central ray to its offset from the image centre, which yields formula (14) for the pixel ordinate y_p' calculated from the geographic coordinates, expressed in terms of H, half the height of the video picture, β_v, the half angle of the vertical view angle, and φ_v, the current vertical azimuth angle of the camera.
Let the pixel height of the camera image be 2H. The pixel coordinate of the target node in the y direction, y_p', is then calculated through formula (15), where y_p' is the pixel ordinate calculated from the geographic coordinates, H is half the height of the video picture, β_v is the half angle of the vertical view angle, φ_v is the current vertical azimuth angle of the camera, and t_v is the vertical azimuth angle of the target ground feature relative to the camera calculated by formula (4).
Similarly, referring to FIG. 6, which shows a vertical section of the camera, the horizontal azimuth angle t_h of the target node P relative to the camera is calculated. The point P is the target point, and the vertical projection of the camera onto the ground is the point from which the horizontal azimuth is measured; the horizontal azimuth angle t_h is given by formula (16), where t_h is the horizontal azimuth angle, (x_0, y_0) are the geographic coordinates of the camera, and (x_p, y_p) are the geographic coordinates of the target.
According to the horizontal azimuth angle t_h of the target point relative to the camera and the position information of the camera, the pixel abscissa of the target node in the video picture image of the camera is calculated. The calculation method is the same as that of the pixel ordinate, so the detailed derivation is not repeated; the pixel abscissa is finally obtained through formula (17), where x_p' is the pixel abscissa calculated from the geographic coordinates, W is half the width of the video picture, β_h is the half angle of the camera's current horizontal view angle, φ_h is the current horizontal azimuth angle of the camera, and t_h is the horizontal azimuth angle of the target ground feature relative to the camera calculated by formula (16).
As an embodiment, after the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera are calculated, the method further includes: converting the geographic coordinates of all target nodes of the reference image spots within the monitoring range into pixel coordinates in the video picture image of the camera; and displaying the reference image spots in the video picture image of the camera according to the pixel coordinates of all the target nodes in the video picture image.
It can be appreciated that, in the above manner, the geographic coordinates of all nodes of the reference image spots are converted into pixel coordinates in the camera's video picture and rendered in the video, achieving the augmented reality effect.
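An end-to-end sketch of this conversion-and-overlay step is shown below. It reuses the illustrative functions sketched above (vertical_azimuth, horizontal_azimuth, pixel_ordinate, pixel_abscissa) and the assumed CameraInfo fields, and draws the converted reference image spot on the frame with OpenCV; the drawing style and the decision to drop vertices that fall outside the frame are choices of the example.

```python
import numpy as np
import cv2


def overlay_reference_spot(frame: np.ndarray, spot_nodes, cam) -> np.ndarray:
    """Convert the (x_p, y_p) geographic vertices of one reference image spot to
    pixel coordinates and draw the resulting polygon on the camera frame."""
    h, w = frame.shape[:2]
    H, W = h / 2.0, w / 2.0                           # half height / half width in pixels
    phi_v, phi_h = cam.tilt, cam.pan                  # current central-ray azimuths (radians)
    beta_v, beta_h = cam.vfov / 2.0, cam.hfov / 2.0   # half view angles (radians)

    pixels = []
    for xp, yp in spot_nodes:
        t_v = vertical_azimuth(cam.x0, cam.y0, cam.z0, xp, yp)
        t_h = horizontal_azimuth(cam.x0, cam.y0, xp, yp)
        u = pixel_abscissa(t_h, phi_h, beta_h, W)
        v = pixel_ordinate(t_v, phi_v, beta_v, H)
        if 0 <= u < w and 0 <= v < h:                 # keep only vertices inside the frame
            pixels.append((int(round(u)), int(round(v))))

    if len(pixels) >= 2:
        pts = np.array(pixels, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(frame, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
    return frame
```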
Fig. 7 is a block diagram of a video monitoring target node pixel coordinate conversion system according to an embodiment of the present invention, as shown in fig. 7, where the system includes an obtaining module 701, a first calculating module 702, and a second calculating module 703, where:
the acquisition module 701 is configured to acquire position information of the camera relative to the earth, and acquire geographic coordinates of a target node in the reference map spot in the monitoring range;
the first calculating module 702 is configured to calculate a vertical azimuth angle and a horizontal azimuth angle of the target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference map spot in the monitoring range;
the second calculating module 703 is configured to calculate a pixel ordinate and a pixel abscissa of the target node in the video frame image of the camera according to the position information of the camera, and a vertical azimuth angle and a horizontal azimuth angle of the target node relative to the camera.
The second calculating module 703, in addition to calculating the pixel ordinate and pixel abscissa of the target node in the video picture image of the camera from the position information of the camera and the vertical and horizontal azimuth angles of the target node relative to the camera, is further configured to convert the geographic coordinates of all target nodes of the reference image spots within the monitoring range into pixel coordinates in the video picture image of the camera.
The system further includes a presentation module 704 for displaying the reference image spots in the video picture image of the camera according to the pixel coordinates of all the target nodes in that image.
It can be understood that the video monitoring target node pixel coordinate conversion system provided by the present invention corresponds to the video monitoring target node pixel coordinate conversion method provided in the foregoing embodiments, and its relevant technical features may refer to the relevant technical features of that method, which are not described herein again.
The method and system for converting the pixel coordinates of a video monitoring target node provided by the invention convert the geographic coordinates of a target ground feature into pixel coordinates simply and quickly. Because the position and posture of a monitoring camera are fixed, the coordinates and elevation of the camera can be measured accurately at installation time and its posture leveled, so the pixel coordinates are calculated directly from the camera's azimuth angles and view angles. This avoids the field-calibration constraints of mobile cameras, reducing operating cost, avoiding the precision loss caused by uncertain field-calibration factors, simplifying the process, and improving both efficiency and precision. By combining GIS technology with pixel coordinate conversion, the invention effectively solves the problem of superimposing reference GIS data on the video picture and enriches the expressive content of the video.
In the foregoing embodiments, the descriptions of the embodiments are focused on, and for those portions of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (4)

1. A method for converting the pixel coordinates of a video monitoring target node, characterized by comprising the following steps:
acquiring position information of a camera relative to the earth, and acquiring geographic coordinates of a target node in a reference image spot in a monitoring range;
according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range, calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera;
according to the position information of the camera, the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera, calculating the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera;
the position information of the camera relative to the earth is (x) 0 ,y 0 ,z 0 ) Wherein (x) 0 ,y 0 ) Z is the large ground plane coordinate where the camera is located 0 The elevation coordinate of the camera from the ground;
calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range comprises calculating t_v and t_h, wherein t_v is the vertical azimuth angle of the target node relative to the camera, t_h is the horizontal azimuth angle of the target node relative to the camera, and (x_p, y_p) are the geographic coordinates of the target node;
according to the position information of the camera, the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera, the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera are calculated, and the method comprises the following steps:
wherein y_p' is the pixel ordinate of the target node in the video picture image of the camera, H is half the height of the video picture image, β_v is the half angle of the vertical view angle at the current pose of the camera, φ_v is the vertical azimuth angle of the camera's current posture, and t_v is the vertical azimuth angle of the target node relative to the camera;
x_p' is the pixel abscissa of the target node in the video picture image of the camera, W is half the width of the video picture image, β_h is the half angle of the current horizontal view angle of the camera, φ_h is the current horizontal azimuth angle of the camera, and t_h is the horizontal azimuth angle of the target node relative to the camera.
2. The method for converting pixel coordinates of a target node according to claim 1, wherein the calculating of pixel ordinate and pixel abscissa of the target node in the video frame image of the camera further comprises:
converting the geographic coordinates of all target nodes in the reference image spots in the monitoring range into pixel coordinates in the video picture image of the camera;
and displaying the reference image spots in the video picture image of the camera according to the pixel coordinates of all the target nodes in the video picture image of the camera.
3. A video surveillance target node pixel coordinate conversion system, comprising:
the acquisition module is used for acquiring the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range;
the first calculation module is used for calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range;
the second calculation module is used for calculating the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera according to the position information of the camera and the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera;
the position information of the camera relative to the earth is (x) 0 ,y 0 ,z 0 ) Wherein (x) 0 ,y 0 ) Z is the large ground plane coordinate where the camera is located 0 The elevation coordinate of the camera from the ground;
calculating the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera according to the position information of the camera relative to the earth and the geographic coordinates of the target node in the reference image spot in the monitoring range comprises calculating t_v and t_h, wherein t_v is the vertical azimuth angle of the target node relative to the camera, t_h is the horizontal azimuth angle of the target node relative to the camera, and (x_p, y_p) are the geographic coordinates of the target node;
according to the position information of the camera, the vertical azimuth angle and the horizontal azimuth angle of the target node relative to the camera, the pixel ordinate and the pixel abscissa of the target node in the video picture image of the camera are calculated, and the method comprises the following steps:
wherein y_p' is the pixel ordinate of the target node in the video picture image of the camera, H is half the height of the video picture image, β_v is the half angle of the vertical view angle at the current pose of the camera, φ_v is the vertical azimuth angle of the camera's current posture, and t_v is the vertical azimuth angle of the target node relative to the camera;
x_p' is the pixel abscissa of the target node in the video picture image of the camera, W is half the width of the video picture image, β_h is the half angle of the current horizontal view angle of the camera, φ_h is the current horizontal azimuth angle of the camera, and t_h is the horizontal azimuth angle of the target node relative to the camera.
4. The system for converting pixel coordinates of a target node according to claim 3, wherein the second calculating module is configured to calculate pixel ordinate and pixel abscissa of the target node in the video frame image of the camera according to the position information of the camera, the vertical azimuth angle and the horizontal azimuth angle of the target node with respect to the camera, and further comprises:
converting the geographic coordinates of all target nodes in the reference image spots in the monitoring range into pixel coordinates in the video picture image of the camera;
the system also comprises a display module for displaying the reference image spots in the video picture images of the video cameras according to the pixel coordinates in the video picture images of all the target node cameras.
CN202311042801.XA 2023-08-18 2023-08-18 Video monitoring target node pixel coordinate conversion method and system Active CN116805277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311042801.XA CN116805277B (en) 2023-08-18 2023-08-18 Video monitoring target node pixel coordinate conversion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311042801.XA CN116805277B (en) 2023-08-18 2023-08-18 Video monitoring target node pixel coordinate conversion method and system

Publications (2)

Publication Number Publication Date
CN116805277A CN116805277A (en) 2023-09-26
CN116805277B (en) 2024-01-26

Family

ID=88079608

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311042801.XA Active CN116805277B (en) 2023-08-18 2023-08-18 Video monitoring target node pixel coordinate conversion method and system

Country Status (1)

Country Link
CN (1) CN116805277B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910210B (en) * 2017-03-03 2018-09-11 百度在线网络技术(北京)有限公司 Method and apparatus for generating image information

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1180324A1 (en) * 2000-08-11 2002-02-20 TD Group S.p.A. Method and apparatus for observing and determining the position of targets in geographical areas
WO2011142093A1 (en) * 2010-05-14 2011-11-17 Sony Corporation Information processing device, information processing system, and program
WO2012151777A1 (en) * 2011-05-09 2012-11-15 上海芯启电子科技有限公司 Multi-target tracking close-up shooting video monitoring system
RU2016119050A (en) * 2016-05-17 2017-11-20 Общество С Ограниченной Ответственностью "Дисикон" METHOD AND SYSTEM OF MEASURING DISTANCE TO REMOTE OBJECTS
WO2020098195A1 (en) * 2018-11-15 2020-05-22 上海埃威航空电子有限公司 Ship identity recognition method based on fusion of ais data and video data
WO2020199153A1 (en) * 2019-04-03 2020-10-08 南京泊路吉科技有限公司 Orthophoto map generation method based on panoramic map
CN114255405A (en) * 2020-09-25 2022-03-29 山东信通电子股份有限公司 Hidden danger target identification method and device
CN113223087A (en) * 2021-07-08 2021-08-06 武大吉奥信息技术有限公司 Target object geographic coordinate positioning method and device based on video monitoring
CN114998425A (en) * 2022-08-04 2022-09-02 吉奥时空信息技术股份有限公司 Target object geographic coordinate positioning method and device based on artificial intelligence
CN115546710A (en) * 2022-08-09 2022-12-30 国网湖北省电力有限公司黄龙滩水力发电厂 Method, device and equipment for locating personnel in hydraulic power plant and readable storage medium
CN116248991A (en) * 2022-12-07 2023-06-09 中星电子股份有限公司 Camera position adjustment method, camera position adjustment device, electronic equipment and computer readable medium
CN116228860A (en) * 2023-02-03 2023-06-06 深圳市科思科技股份有限公司 Target geographic position prediction method, device, equipment and storage medium
CN116342783A (en) * 2023-05-25 2023-06-27 吉奥时空信息技术股份有限公司 Live-action three-dimensional model data rendering optimization method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Video background target positioning and labeling method based on associative mapping between geographic coordinates and video images; 黄仝宇, 胡斌杰; Mobile Communications (Issue 01); full text *
Fast matching of remote sensing images with geographic coordinates based on an index matrix; 乔斌; Science and Technology for Development (Issue 04); full text *
Target azimuth measurement method based on computer vision; 孙少杰, 杨晓东; Fire Control & Command Control (Issue 03); full text *

Also Published As

Publication number Publication date
CN116805277A (en) 2023-09-26

Similar Documents

Publication Publication Date Title
CN106599119B (en) Image data storage method and device
US7944547B2 (en) Method and system of generating 3D images with airborne oblique/vertical imagery, GPS/IMU data, and LIDAR elevation data
EP2401703B1 (en) System and method of indicating transition between street level images
CN108154558B (en) Augmented reality method, device and system
US10733777B2 (en) Annotation generation for an image network
WO2010061861A1 (en) Stereo matching process device, stereo matching process method, and recording medium
CN104284155A (en) Video image information labeling method and device
US20130127852A1 (en) Methods for providing 3d building information
CN113192183A (en) Real scene three-dimensional reconstruction method and system based on oblique photography and panoramic video fusion
US20170103568A1 (en) Smoothing 3d models of objects to mitigate artifacts
CN102957895A (en) Satellite map based global mosaic video monitoring display method
CN109120901B (en) Method for switching pictures among cameras
CN107590854A (en) Reservoir region three-dimensional live methods of exhibiting based on WEBGIS
KR100961719B1 (en) Method and apparatus for controlling camera position using of geographic information system
CN115439531A (en) Method and equipment for acquiring target space position information of target object
JP5669438B2 (en) Object management image generation apparatus and object management image generation program
CN117115243B (en) Building group outer facade window positioning method and device based on street view picture
WO2020051208A1 (en) Method for obtaining photogrammetric data using a layered approach
CN116805277B (en) Video monitoring target node pixel coordinate conversion method and system
CN114494563B (en) Method and device for fusion display of aerial video on digital earth
CA2704656C (en) Apparatus and method for modeling building
CN115357052A (en) Method and system for automatically exploring interest points in video picture by unmanned aerial vehicle
CN116823936B (en) Method and system for acquiring longitude and latitude by using camera screen punctuation
KR102605696B1 (en) Method and System of Estimating CCTV Camera Pose and 3D Coordinate of Mapping Object based on High Definition Map
CN116912320B (en) Positioning method and device of object elevation coordinate, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant