CN117670946A - Video target geographic position mapping method and system

Video target geographic position mapping method and system

Info

Publication number
CN117670946A
Authority
CN
China
Prior art keywords
video
map
grid
video image
target
Prior art date
Legal status
Pending
Application number
CN202311649091.7A
Other languages
Chinese (zh)
Inventor
董锦华
任伏虎
王勇华
束飙
董奕佳
Current Assignee
Beijing Xinghe Dadi Digital Technology Co ltd
Original Assignee
Beijing Xinghe Dadi Digital Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xinghe Dadi Digital Technology Co ltd
Priority to CN202311649091.7A
Publication of CN117670946A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a video target geographic position mapping method and system, together with a storage medium and a processor, and belongs to the technical field of video monitoring and spatial positioning. The method comprises the following steps: acquiring the position of the video device; locating the position of the video device at a coordinate point on a map; drawing a grid on the map according to a preset rule; registering the video with the map and overlapping the registered video with the map; drawing a grid over the registered video image; establishing a mapping relation between video image coordinates and map coordinates through grid codes; identifying a target in the video image; calculating a corresponding set of grid codes from the video image coordinate range of the target; and obtaining the coordinate range of the target on the map through the grid codes in the set. By establishing a mapping between the video position of the recognized target and its actual geographic position, the invention locates targets identified in the video image on the map, so that targets can be positioned more conveniently and the positioning accuracy is improved.

Description

Video target geographic position mapping method and system
Technical Field
The invention relates to the technical field of video monitoring and spatial positioning, and in particular to a video target geographic position mapping method and system.
Background
Surveillance video provides rich, intuitive and timely information and is therefore widely used across many industries. Once surveillance video is coupled with the geographic environment, the geographic position of a video target can be determined even in large-scale, complex scenes. Commonly used video space recovery and positioning methods include: 1) calculating spatial position coordinates from identified image point coordinates; 2) performing perspective and projection transformations through three-dimensional projection transformation; 3) performing stereo-pair calculation and restoring video content through multi-angle shooting. However, these methods have the following problems: 1) the accuracy is not high, and it decreases the farther a point is from the lens; 2) nonlinear projection makes it difficult to establish a unified conversion formula through a projection equation; 3) the lens parameters required for three-dimensional space recovery are difficult to obtain and the influence of actual terrain relief is hard to handle; moreover, the technologies involved in these three methods are too complex to implement easily.
Disclosure of Invention
The embodiments of the invention aim to provide a video target geographic position mapping method, system, storage medium and processor, which can improve the accuracy of positioning video content on a map, reduce the difficulty of positioning and improve its feasibility.
In order to achieve the above object, an embodiment of the present invention provides a method for mapping a geographic location of a video object, including: acquiring the position of video equipment; positioning the position of the video device to a coordinate point on the map; drawing grids on a map according to preset rules, wherein the rules comprise an earth subdivision grid coding rule, and each grid is provided with a unique code; registering the video image with the map, and overlapping the registered video image with the map; drawing grids on the registered video images, so that the grid coding of the video images is consistent with the grid coding of the map; establishing a mapping relation between video image coordinates and map coordinates through grid coding; identifying a target in the video image; and obtaining the coordinate range of the target on the map according to the video image coordinate range and the mapping relation of the target.
Optionally, according to the above method for mapping the geographic location of a video object, drawing a grid on the registered video image includes: drawing a two-dimensional grid on the registered video image; acquiring the height and layer number information of a three-dimensional object in the video image; calculating the height of each layer; and pulling up (extruding) the two-dimensional grid to draw a three-dimensional grid.
Optionally, according to the above method for mapping the geographic location of a video object, registering the video image with the map and overlapping the registered video image with the map includes: selecting at least one marker in the video image; and adjusting the shooting angle of the video device according to the marker to register the video image with the map.
Optionally, according to the above method for mapping the geographic location of a video object, drawing a grid on the registered video image further includes: at positions where the video image is distorted, establishing a distortion mapping between the video image coordinates and the map coordinates.
Optionally, according to the above method for mapping a geographic location of a video object, the establishing a mapping relationship between coordinates of a video image and coordinates of a map by grid coding includes: acquiring the height and layer number information of a three-dimensional object in a video image; performing three-dimensional grid modeling in a map; and corresponding the three-dimensional object of the video image to the three-dimensional grid modeling of the map, and obtaining the three-dimensional coordinate data of the map corresponding to the coordinates of the video image.
Optionally, according to the above method for mapping a geographic location of a video object, obtaining a coordinate range of the object on a map according to a video image coordinate range and a mapping relationship of the object includes: calculating according to the video image coordinate range of the target to obtain a corresponding grid coding set; and inquiring the mapping relation through the grid codes in the grid code set to obtain the coordinate range of the target on the map.
Optionally, according to the above method for mapping the geographic location of a video object, the establishing of the mapping relationship between video image coordinates and map coordinates through grid coding includes:
ScreenGridX = GridNumX × ScreenX / GridCountX, where ScreenX represents the video coordinate length, GridCountX represents the total number of grids along the X direction, GridNumX represents the N-th grid along the X direction, ScreenGridX represents the video X coordinate of the computed grid, and N is an integer greater than or equal to 0;
ScreenGridY = GridNumY × ScreenY / GridCountY, where ScreenY represents the video coordinate width, GridCountY represents the total number of grids along the Y direction, GridNumY represents the M-th grid along the Y direction, ScreenGridY represents the video Y coordinate of the computed grid, and M is an integer greater than or equal to 0.
In another aspect, the present invention provides a video content locating system, comprising: an acquisition module configured to acquire the position of a video device; a positioning module configured to locate the position of the video device at a coordinate point on a map; a drawing module configured to draw grids on the map according to preset rules, wherein the rules include an earth subdivision grid coding rule and each grid has a unique code; a registration module configured to register the video image with the map and overlap the registered video image with the map; the drawing module being further configured to draw a grid on the registered video image so that the video image grid codes coincide with the map grid codes; a mapping module configured to establish a mapping relationship between video image coordinates and map coordinates through grid coding; an identification module configured to identify a target in the video; and a coordinate module configured to obtain the coordinate range of the target on the map from the video image coordinate range of the target and the mapping relationship.
In yet another aspect, the present invention provides a machine-readable storage medium having stored thereon instructions for causing a machine to perform the video target geographic position mapping method described in any of the above embodiments.
In yet another aspect, the present invention provides a processor for running a program, wherein the program, when run, performs the video target geographic position mapping method described in any of the above embodiments.
According to the invention, using grid technology and video recognition technology, a mapping relationship among the video position of the recognized target, the grid code and the actual geographic position is established, so that a target recognized in the video image is accurately projected onto the corresponding position on the map to obtain its geographic position; the target can thus be located more conveniently, and the positioning accuracy is improved.
Additional features and advantages of embodiments of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings are included to provide a further understanding of embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain, without limitation, the embodiments of the invention. In the drawings:
figure 1 is a flow chart of a video object geographic location mapping method according to some embodiments of the present application,
figure 2 is a schematic diagram illustrating registration of video images with a map according to some embodiments of the present application,
figure 3 is a schematic diagram illustrating correspondence of a two-dimensional grid of video images with two-dimensional coordinate data of a map according to some embodiments of the present application,
figure 4 is a schematic diagram illustrating registration distortion of a map grid with a video image according to some embodiments of the present application,
figure 5 is a schematic diagram of a video image grid according to some embodiments of the present application,
fig. 6 is a block diagram illustrating a video content locating system according to some embodiments of the present application.
Detailed Description
The following describes the detailed implementation of the embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating and illustrating the invention, are not intended to limit the invention.
According to the invention, two-dimensional and/or three-dimensional grids are drawn on the map based on the GeoSOT earth subdivision grid model and the Beidou grid code standard specification, and a video recognition technology is used to establish the mapping relationship among the video position of the recognized target, the grid code and the actual geographic position, so that a target recognized in the video image can be accurately projected onto the corresponding position on the map to obtain its geographic position; the target can thus be located more conveniently, and the positioning accuracy is improved.
Fig. 1 is a flowchart illustrating a method for mapping a geographic location of a video object according to some embodiments of the present application, where the method includes the following steps:
s110, acquiring the position of video equipment; s120, positioning the position of the video equipment to a coordinate point on a map; s130, drawing grids on the map according to preset rules, wherein the rules comprise an earth subdivision grid coding rule, and each grid is provided with a unique code; s140, registering the video image with the map, and overlapping the registered video image with the map; s150, drawing grids on the registered video images, so that the grid coding of the video images is consistent with the grid coding of the map; s160, establishing a mapping relation between video image coordinates and map coordinates through grid coding; s170, identifying a target in the video image; s180, obtaining the coordinate range of the target on the map according to the video image coordinate range and the mapping relation of the target.
In a specific embodiment of the application, two-dimensional and three-dimensional grid modeling of the video space is performed through the Beidou grid code technology, the video position in the video space is associated with the actual position on the map through the Beidou grid code, and the video content is intelligently recognized by means of an AI recognition algorithm, thereby achieving accurate positioning of the target involved in an event and associated analysis with data external to the video.
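As context for the grid codes referred to above, the GeoSOT / Beidou grid code standard subdivides the Earth's surface hierarchically so that every cell at every level has a unique code. The Python sketch below illustrates only the general idea with a simplified quadtree over a longitude/latitude extent; it is not an implementation of the actual GeoSOT or Beidou grid code standard, and the function name, subdivision level and example point are illustrative assumptions.

```python
def quadtree_grid_code(lon, lat, level,
                       lon_min=-180.0, lon_max=180.0,
                       lat_min=-90.0, lat_max=90.0):
    """Simplified quadtree subdivision code (not the real GeoSOT/Beidou code).

    At each level the current cell is split into 2 x 2 children and one
    quadrant digit (0-3) is appended, so every cell gets a unique code and
    finer levels correspond to smaller cells."""
    digits = []
    for _ in range(level):
        lon_mid = (lon_min + lon_max) / 2.0
        lat_mid = (lat_min + lat_max) / 2.0
        quadrant = (1 if lon >= lon_mid else 0) + (2 if lat >= lat_mid else 0)
        digits.append(str(quadrant))
        # shrink the current cell to the chosen quadrant
        if lon >= lon_mid:
            lon_min = lon_mid
        else:
            lon_max = lon_mid
        if lat >= lat_mid:
            lat_min = lat_mid
        else:
            lat_max = lat_mid
    return "".join(digits)

# Example: code of the level-16 cell containing a point near the fig. 3 extent
print(quadtree_grid_code(114.20, 37.23, level=16))
```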
In this embodiment of the present application, in step S110 the video device includes any electronic device capable of capturing video, including but not limited to a camera, a mobile phone and a camcorder. In steps S110 and S120, by acquiring the position of the video device, the video is located near a coordinate point on the map, where the deviation of the shooting location from the coordinate point is within a first preset range, for example within 10 meters. In step S140, at least one marker in the video image is selected; the shooting angle of the video device is adjusted according to the marker, and the video image is registered with the map. Specifically, the video is registered with the map by adjusting the shooting parameters of the video device. In some embodiments of the application, after registration the video may still be misaligned within a second preset range, for example within 1 meter; in that case the registration is fine-tuned by adjusting the parameter data of the video device until the video overlaps the map. In some embodiments of the application, one or more specific markers in the video can be selected as reference points to judge whether the video overlaps the map. As shown in fig. 2, the screenshot in the upper left corner of fig. 2 is a screenshot of the video and the rest is the map; the road in the video screenshot can be used as a reference for registration with the map.
In step S150, grids are drawn at the same positions on the video screenshot image according to the corresponding positions on the map and are given the same grid codes, and the longitude and latitude range corresponding to the image coordinates of each grid is acquired on the video image.
According to a specific embodiment of the present application, the principle of acquiring two-dimensional coordinate data of a map corresponding to a two-dimensional grid of a video is shown in fig. 3, and in fig. 3, a process of drawing the two-dimensional grid on the video is as follows:
drawing two-dimensional grids on the map over the geographic range shot by the video device, where the two-dimensional grids carry grid codes and geographic coordinate ranges, i.e. the two-dimensional coordinate data of the map; acquiring the screen coordinate pixel range of the video shooting space, such as 1920 × 1080, and the pixel coordinates of the screen corners; and calculating the number of row and column grids to obtain the screen coordinate range of each screen grid, where the calculation formulas are:
ScreenGridX = GridNumX × ScreenX / GridCountX, wherein
ScreenX represents the screen X coordinate length, GridCountX represents the actual number of map grids along the X direction (per row), GridNumX represents the index of the grid along the X direction, counting from 0, and ScreenGridX represents the screen X coordinate of the computed screen grid;
ScreenGridY = GridNumY × ScreenY / GridCountY, wherein
ScreenY represents the screen Y coordinate length, GridCountY represents the actual number of map grids along the Y direction (per column), GridNumY represents the index of the grid along the Y direction, counting from 0, and ScreenGridY represents the screen Y coordinate of the computed screen grid.
In fig. 3, A represents the video grid and B represents the map grid, where the grid start point (0, 0) of A corresponds to the grid start point (114.1342424, 37.1312311) of B, and the grid end point (1920, 1080) of A corresponds to the grid end point (114.2983424, 37.3334321) of B.
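To make the formulas above concrete, the sketch below applies them under the assumption of a purely linear relationship between the registered 1920 × 1080 video frame and the map extent given for fig. 3; the function and variable names are illustrative and not taken from the patent.

```python
# Minimal sketch of the 2D screen-to-map mapping described above, assuming a
# purely linear relationship over the registered extent of fig. 3. The corner
# coordinates are the example values from the text; all names are illustrative.
SCREEN_W, SCREEN_H = 1920, 1080
MAP_LON_MIN, MAP_LAT_MIN = 114.1342424, 37.1312311   # grid start point of B
MAP_LON_MAX, MAP_LAT_MAX = 114.2983424, 37.3334321   # grid end point of B

def grid_screen_origin(grid_num_x, grid_num_y, grid_count_x, grid_count_y):
    """ScreenGridX/Y formulas: screen coordinates of grid cell (grid_num_x, grid_num_y)."""
    screen_grid_x = grid_num_x * SCREEN_W / grid_count_x
    screen_grid_y = grid_num_y * SCREEN_H / grid_count_y
    return screen_grid_x, screen_grid_y

def screen_to_map(x, y):
    """Linear interpolation from a screen pixel to a map longitude/latitude."""
    lon = MAP_LON_MIN + (x / SCREEN_W) * (MAP_LON_MAX - MAP_LON_MIN)
    lat = MAP_LAT_MIN + (y / SCREEN_H) * (MAP_LAT_MAX - MAP_LAT_MIN)
    return lon, lat

# Example: map position of the centre of grid cell (12, 7) in a 32 x 18 grid
x0, y0 = grid_screen_origin(12, 7, 32, 18)
x1, y1 = grid_screen_origin(13, 8, 32, 18)
print(screen_to_map((x0 + x1) / 2, (y0 + y1) / 2))
```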
Fig. 4 is a schematic diagram of registration distortion between the map grid and the video image according to some embodiments of the present application. In fig. 4, the picture is a video image captured by a camera and the grid overlaid on it is the corresponding map grid. Distortion occurs in the distant part of the picture when the camera captures the video image, which can cause the position of a video image coordinate point to deviate from the actual map coordinate when mapping them one by one: the grids at the actual map positions are rectangular and generally uniformly distributed, but when they are mapped onto the video image on the screen the distortion deforms them into trapezoids or other shapes. To handle this situation, a conversion relationship between map coordinate points and video image coordinate points is constructed through a system of linear equations, and a distortion mapping between the coordinate points is solved, so that any coordinate on the video image can be converted into a map position coordinate with a small error. In the regular grid on the video image shown in fig. 5, positions where distortion occurs can thus be located in map coordinates according to the constructed distortion mapping, thereby improving the accuracy of target positioning.
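As a concrete illustration of constructing the conversion through a system of linear equations, the sketch below fits a planar homography from four (or more) corresponding grid corners between the distorted video image and the map, using the standard direct linear transformation. The patent does not specify the exact equation system, so this formulation, the corner values and all names are assumptions made for illustration.

```python
import numpy as np

def fit_distortion_mapping(img_pts, map_pts):
    """Estimate a planar homography H (the 'distortion mapping') such that
    map_pt ~ H @ [x, y, 1], from at least four image/map correspondences,
    by solving the standard DLT linear system via SVD."""
    rows = []
    for (x, y), (u, v) in zip(img_pts, map_pts):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value

def image_to_map(H, x, y):
    """Convert a video image coordinate into a map coordinate through H."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Four distorted grid corners in the image and their map positions (example values)
img_corners = [(100, 900), (1800, 950), (1500, 300), (400, 280)]
map_corners = [(114.140, 37.135), (114.290, 37.137), (114.285, 37.320), (114.145, 37.318)]
H = fit_distortion_mapping(img_corners, map_corners)
print(image_to_map(H, 960, 540))   # map coordinate of the image centre
```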
According to some embodiments of the present application, in the method for mapping the geographic location of a video object, drawing the grid on the registered video image further includes:
acquiring the height and layer number information of a three-dimensional object in the video image; calculating the height of each layer; and pulling up (extruding) the two-dimensional grid to draw a three-dimensional grid, where the height and layer number information of the three-dimensional object can come from existing building data.
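A minimal sketch of this extrusion step is given below, assuming that the height of each layer is simply the building height divided by the number of layers and that a 3D cell is encoded by appending a height-level suffix to the 2D grid code; both conventions are illustrative assumptions and not the Beidou/GeoSOT 3D coding itself.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cell3D:
    code: str        # 2D grid code plus an assumed height-level suffix
    z_bottom: float  # metres above ground
    z_top: float

def extrude_cell(grid_code_2d: str, building_height_m: float, layers: int) -> List[Cell3D]:
    """Pull a 2D grid cell up into a stack of 3D cells, one per layer."""
    layer_height = building_height_m / layers   # height of each layer
    return [
        Cell3D(code=f"{grid_code_2d}-H{level}",
               z_bottom=level * layer_height,
               z_top=(level + 1) * layer_height)
        for level in range(layers)
    ]

# Example: a 24 m building with 8 layers occupying the (hypothetical) cell "3102213210"
for cell in extrude_cell("3102213210", building_height_m=24.0, layers=8):
    print(cell)
```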
According to some embodiments of the present application, establishing the mapping relationship between the video image coordinates and the map coordinates through grid coding further includes:
acquiring the height and layer number information of the three-dimensional object; performing three-dimensional grid modeling on the map; and corresponding the three-dimensional object of the video to the three-dimensional grid model of the map to obtain the three-dimensional map coordinate data corresponding to the three-dimensional grid codes of the video, where the registered video content is converted according to the mapping mode of the two-dimensional grid, and the height coordinate of the object's screen grid is additionally calculated on top of the two-dimensional grid.
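The sketch below shows one way the height coordinate can be attached on top of the 2D mapping, using the same linear screen-to-map interpolation and map extent as the earlier 2D sketch; how the two parts are combined here is an assumption for illustration, not a formulation taken verbatim from the patent.

```python
# Illustrative only: convert a video-image point plus a layer index into a
# 3D map coordinate (lon, lat, height). The linear 2D mapping and the way
# the height coordinate is attached are assumptions.
SCREEN_W, SCREEN_H = 1920, 1080
LON_MIN, LAT_MIN = 114.1342424, 37.1312311
LON_MAX, LAT_MAX = 114.2983424, 37.3334321

def screen_to_map_3d(x, y, layer_index, layer_height_m):
    lon = LON_MIN + (x / SCREEN_W) * (LON_MAX - LON_MIN)
    lat = LAT_MIN + (y / SCREEN_H) * (LAT_MAX - LAT_MIN)
    height = (layer_index + 0.5) * layer_height_m   # centre of the given layer
    return lon, lat, height

# Example: a target detected at pixel (960, 540) on the 4th layer (index 3)
print(screen_to_map_3d(960, 540, layer_index=3, layer_height_m=3.0))
```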
In some embodiments of the present application, step S180 includes: step S181, calculating a corresponding set of grid codes from the video image coordinate range of the target, where a target detection and recognition technology based on deep learning is used to identify moving targets in the video and the screen coordinate information of the target in the image, and the corresponding set of grid codes is obtained according to the screen-coordinate-to-grid-code mapping formula established in the previous steps; and step S182, obtaining the coordinate range of the target on the map by querying the mapping relationship with the grid codes in the set.
In the above embodiment, the map coordinate data and the video coordinate data corresponding to the acquired grids follow the mapping formula between screen coordinate information and grid codes, that is:
ScreenGridX = GridNumX × ScreenX / GridCountX, wherein
ScreenX represents the video coordinate length, GridCountX represents the total number of grids along the X direction, GridNumX represents the N-th grid along the X direction, ScreenGridX represents the video X coordinate of the computed grid, and N is an integer greater than or equal to 0;
ScreenGridY = GridNumY × ScreenY / GridCountY, wherein
ScreenY represents the video coordinate width, GridCountY represents the total number of grids along the Y direction, GridNumY represents the M-th grid along the Y direction, ScreenGridY represents the video Y coordinate of the computed grid, and M is an integer greater than or equal to 0.
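A hedged end-to-end sketch of steps S181 and S182 follows: the detected target's screen-space bounding box is converted into the set of grid cells it covers (their (column, row) indices standing in for the grid codes), and the covered extent is then converted back into a map coordinate range. The grid dimensions, screen size and linear screen-to-map mapping are illustrative assumptions.

```python
SCREEN_W, SCREEN_H = 1920, 1080          # assumed video resolution
GRID_COUNT_X, GRID_COUNT_Y = 32, 18      # assumed grid dimensions
LON_MIN, LAT_MIN = 114.1342424, 37.1312311
LON_MAX, LAT_MAX = 114.2983424, 37.3334321

def bbox_to_grid_set(x_min, y_min, x_max, y_max):
    """Indices of all grid cells overlapped by a screen-space bounding box (step S181)."""
    cell_w, cell_h = SCREEN_W / GRID_COUNT_X, SCREEN_H / GRID_COUNT_Y
    cols = range(int(x_min // cell_w), int(x_max // cell_w) + 1)
    rows = range(int(y_min // cell_h), int(y_max // cell_h) + 1)
    return {(c, r) for c in cols for r in rows}

def grid_set_to_map_range(grid_set):
    """Map (lon/lat) extent covered by a set of grid cells (step S182)."""
    cell_lon = (LON_MAX - LON_MIN) / GRID_COUNT_X
    cell_lat = (LAT_MAX - LAT_MIN) / GRID_COUNT_Y
    cols = [c for c, _ in grid_set]
    rows = [r for _, r in grid_set]
    return ((LON_MIN + min(cols) * cell_lon, LAT_MIN + min(rows) * cell_lat),
            (LON_MIN + (max(cols) + 1) * cell_lon, LAT_MIN + (max(rows) + 1) * cell_lat))

# Example: bounding box of a detected target in screen coordinates
cells = bbox_to_grid_set(600, 400, 760, 700)
print(cells)
print(grid_set_to_map_range(cells))
```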
Fig. 6 is a block diagram of a video content positioning system according to some embodiments of the present application. As shown in fig. 6, the system includes an acquisition module 201, a positioning module 202, a registration module 203, a drawing module 204, a mapping module 205, an identification module 206 and a coordinate module 207, where the acquisition module 201 is configured to acquire the position of a video device; the positioning module 202 is configured to locate the position of the video device at a coordinate point on a map; the drawing module 204 is configured to draw grids on the map according to preset rules, wherein the rules include an earth subdivision grid coding rule and each grid has a unique code; the registration module 203 is configured to register the video image with the map and overlap the registered video image with the map; the drawing module 204 is further configured to draw a grid on the registered video image so that the video image grid codes coincide with the map grid codes; the mapping module 205 is configured to establish a mapping relationship between video image coordinates and map coordinates through grid coding; the identification module 206 is configured to identify a target in the video; and the coordinate module 207 is configured to obtain the coordinate range of the target on the map from the video image coordinate range of the target and the mapping relationship.
Any number of the modules and sub-modules according to embodiments of the present application, or at least part of the functionality of any number of them, may be implemented in one module. Any one or more of the modules and sub-modules according to embodiments of the present application may be split into multiple modules for implementation. Any one or more of the modules and sub-modules according to embodiments of the present application may be implemented at least in part as a hardware circuit, such as a field programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system in a package or an application specific integrated circuit (ASIC), or in any other reasonable manner of hardware or firmware that integrates or packages the circuit, or in any suitable combination of software, hardware and firmware. Alternatively, one or more of the modules and sub-modules according to embodiments of the present application may be at least partially implemented as computer program modules which, when executed, perform the corresponding functions. For example, any of the acquisition module 201, the positioning module 202, the registration module 203, the drawing module 204, the mapping module 205, the identification module 206 and the coordinate module 207 may be combined into one module for implementation, or any one of them may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module.
The embodiment of the invention provides a storage medium, on which a program is stored, which when executed by a processor, implements the video object geographic position mapping method.
The embodiment of the invention provides a processor which is used for running a program, wherein the program runs to execute the video target geographic position mapping method.
In particular, the processor may include, for example, a general purpose microprocessor, an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor may also include on-board memory for caching purposes. The processor may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The storage medium may include forms of non-permanent memory in a computer readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information that can be accessed by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method for mapping a geographic location of a video object, comprising:
acquiring the position of video equipment;
positioning the position of the video device to a coordinate point on the map;
drawing grids on a map according to preset rules, wherein the rules comprise an earth subdivision grid coding rule, and each grid is provided with a unique code;
registering the video image with the map, and overlapping the registered video image with the map;
drawing grids on the registered video images, so that the grid coding of the video images is consistent with the grid coding of the map;
establishing a mapping relation between video image coordinates and map coordinates through grid coding;
identifying a target in the video image;
and obtaining the coordinate range of the target on the map according to the video image coordinate range and the mapping relation of the target.
2. A method of mapping a geographic location of a video object as defined in claim 1, wherein said rendering a grid on the registered video image comprises:
drawing a two-dimensional grid on the registered video images;
acquiring the height and layer number information of a three-dimensional object in a video image;
calculating the height of each layer;
and pulling up (extruding) the two-dimensional grid to draw the three-dimensional grid.
3. The method of claim 2, wherein registering the video image with the map, the registered video image overlapping the map, comprises:
selecting at least one marker in the video image;
and adjusting the shooting angle of the video equipment according to the markers, and registering the video image with the map.
4. A method for mapping a geographic location of a video object according to claim 3, wherein said establishing a mapping relationship between video image coordinates and map coordinates by grid coding comprises:
acquiring the height and layer number information of a three-dimensional object in a video image;
performing three-dimensional grid modeling in a map;
and corresponding the three-dimensional object of the video image to the three-dimensional grid modeling of the map, and obtaining the three-dimensional coordinate data of the map corresponding to the coordinates of the video image.
5. A method for mapping a geographic location of a video object according to claim 3, wherein obtaining a coordinate range of the object on a map from a video image coordinate range and a mapping relationship of the object comprises:
calculating according to the video image coordinate range of the target to obtain a corresponding grid coding set;
and inquiring the mapping relation through the grid codes in the grid code set to obtain the coordinate range of the target on the map.
6. The method according to claim 4, wherein the establishing a mapping relationship between the video image coordinates and the map coordinates by the grid coding comprises:
ScreenGridX = GridNumX × ScreenX / GridCountX, wherein
ScreenX represents the video coordinate length, GridCountX represents the total number of grids along the X direction, GridNumX represents the N-th grid along the X direction, ScreenGridX represents the video X coordinate of the computed grid, and N is an integer greater than or equal to 0;
ScreenGridY = GridNumY × ScreenY / GridCountY, wherein
ScreenY represents the video coordinate width, GridCountY represents the total number of grids along the Y direction, GridNumY represents the M-th grid along the Y direction, ScreenGridY represents the video Y coordinate of the computed grid, and M is an integer greater than or equal to 0.
7. The method according to claim 6, wherein the mapping relationship between the video image coordinates and the map coordinates is established by grid coding, and further comprising:
at a position where the video image is distorted, establishing a distortion mapping between the video image coordinates and the map coordinates.
8. A video content locating system, comprising:
an acquisition module configured to acquire a location of a video apparatus;
a positioning module configured to position a location of the video apparatus to a coordinate point on a map;
a drawing module configured to draw grids on a map according to preset rules, wherein the rules include an earth subdivision grid coding rule, each grid having a unique code;
a registration module configured to register the video image with the map, the registered video image overlapping the map;
the drawing module being further configured to draw a grid on the registered video image, causing the video image grid coding to coincide with the map grid coding;
a mapping module configured to establish a mapping relationship of video image coordinates and map coordinates by grid coding;
an identification module configured to identify a target in a video;
and the coordinate module is configured to obtain the coordinate range of the target on the map according to the video image coordinate range and the mapping relation of the target.
9. A machine-readable storage medium having stored thereon instructions for causing a machine to perform a video object geographical location mapping method as set forth in any one of claims 1-7 above.
10. A processor configured to execute a program, wherein the program is configured to, when executed, perform: a method of video object geographical location mapping as claimed in any one of claims 1 to 7.
CN202311649091.7A 2023-12-04 2023-12-04 Video target geographic position mapping method and system Pending CN117670946A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311649091.7A CN117670946A (en) 2023-12-04 2023-12-04 Video target geographic position mapping method and system

Publications (1)

Publication Number Publication Date
CN117670946A true CN117670946A (en) 2024-03-08

Family

ID=90083902

Country Status (1)

Country Link
CN (1) CN117670946A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309967A (en) * 2020-01-23 2020-06-19 北京旋极伏羲科技有限公司 Video spatial information query method based on grid coding
CN112380894A (en) * 2020-09-30 2021-02-19 北京智汇云舟科技有限公司 Video overlapping area target duplicate removal method and system based on three-dimensional geographic information system
CN115222920A (en) * 2022-09-20 2022-10-21 北京智汇云舟科技有限公司 Image-based digital twin space-time knowledge graph construction method and device
CN116824457A (en) * 2023-06-30 2023-09-29 西安卓越视讯科技有限公司 Automatic listing method based on moving target in panoramic video and related device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination