CN111309967A - Video spatial information query method based on grid coding

Video spatial information query method based on grid coding

Info

Publication number
CN111309967A
CN111309967A (application CN202010076780.3A)
Authority
CN
China
Prior art keywords
space
video
information
grid
coding
Prior art date
Legal status
Granted
Application number
CN202010076780.3A
Other languages
Chinese (zh)
Other versions
CN111309967B (en)
Inventor
黄朔
任伏虎
王林
刘越
杨辉
刘博
蔡钰
王强宇
施若平
Current Assignee
Beidou Fuxi Information Technology Co ltd
Original Assignee
Beijing Xuanji Fuxi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xuanji Fuxi Technology Co ltd
Priority to CN202010076780.3A
Publication of CN111309967A
Application granted
Publication of CN111309967B
Legal status: Active
Anticipated expiration

Classifications

    • G06F16/73 Information retrieval of video data: Querying
    • G06F16/74 Information retrieval of video data: Browsing; Visualisation therefor
    • G06F16/787 Retrieval of video data using metadata, e.g. geographical or spatial information such as location
    • G06T17/05 Three-dimensional [3D] modelling: Geographic models
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06T2210/04 Indexing scheme for image generation or computer graphics: Architectural design, interior design
    • Y02A30/60 Adapting or protecting infrastructure or their operation: Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Library & Information Science (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Signal Processing (AREA)
  • Instructional Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a video spatial information query method based on grid coding. A coordinate system is first established for the video space so that every pixel in the video has a corresponding spatially referenced coordinate position; the video space is then registered and matched with the real space, after which grid subdivision and grid coding are performed. Through the grid codes, a one-to-one spatial association is established between the video space and the corresponding real space, and the urban entity spatial data source information is loaded into the video space as information data attached to the grid codes. The urban entity spatial data source information can therefore be displayed while the video is playing and can be queried through the grid codes, overcoming the shortcoming of the prior art that road information, urban underground pipe network information, ground urban component and facility information, building information, in-building association information and the like within the video field of view cannot be effectively displayed and explained.

Description

Video spatial information query method based on grid coding
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a video spatial information query method based on grid coding.
Background
In a conventional video surveillance system, the video picture generally shows only pre-entered address subtitles, such as "XXX intersection", together with the associated time and date display. The real geographic position, road information, building information, urban component information and so on of the scene captured in the video field of view have to be judged from the viewer's familiarity with the district and from the images seen, or queried and displayed with the help of additional auxiliary materials and tools. Associating camera video with real-world map information in this way is inefficient, and detailed, accurate information about the physical-space objects appearing in the video field of view cannot be located and acquired directly and precisely on the video.
With existing image recognition and face recognition technology, dynamic license plates and faces can be recognized from high-definition video, and people flow, traffic flow and the like can be counted. However, road information, urban underground pipe network information, ground urban component and facility information, building information, in-building association information and the like within the video field of view still cannot be effectively displayed and explained.
Disclosure of Invention
To solve the above problems, the invention provides a video spatial information query method based on grid coding: a coordinate system is established for the video space; on this basis, spatial grid codes are used to associate the video space with the real space; the urban entity spatial data source information is called into the video scene; the related information is displayed in the video while it plays; and interaction between the video pixel picture and the information is realized.
A video spatial information query method based on grid coding comprises the following steps:
S1: registering the pixel coordinates of the ground in the original video space with the geographic coordinates of the ground in the physical space, and establishing for the original video space a coordinate system identical to that of the physical space;
S2: performing grid subdivision and coding on all spatial elements within the registered video space range using the GeoSOT earth subdivision grid coding technology, to obtain a video space three-dimensional grid map;
S3: associating the video space three-dimensional grid map with the urban entity space three-dimensional grid map, so that the grid information of the video space corresponds one to one, through the grid codes, with the urban entity spatial data source information;
S4: transforming the video space three-dimensional grid map associated with the urban entity spatial data source into a grid shape matched with the frame picture of the original video space, to obtain a matched video space three-dimensional grid map;
S5: displaying the matched video space three-dimensional grid map, wherein each grid can display the associated urban entity spatial data source information.
Further, the urban entity spatial data source information comprises road information, urban underground pipe network information, ground urban component and facility information, building information and in-building association information.
Further, the urban entity spatial data source information comprises the number of floors, the number of units per floor and the unit addresses of an urban building.
Further, the urban entity spatial data source information is the data information associated with each grid in an urban entity three-dimensional grid map obtained by the GeoSOT earth subdivision grid coding technology.
Further, the video spatial information query method based on grid coding further comprises the following steps:
performing grid subdivision and grid coding on the field space according to the three-dimensional grid map of the video space;
providing cameras for the field space such that each grid code of the field space covered by the camera fields of view corresponds to at least one camera.
Further, in step S1, registering the pixel coordinates of the ground in the original video space with the geographic coordinates of the ground in the physical space comprises:
S11: selecting several groups of homonymous image feature points on the remote sensing image map and on the video frame picture respectively, wherein the homonymous image feature points are all located on the ground and cover the maximum range of the video frame picture;
S12: for each group of homonymous image feature points, assigning the geographic coordinates of the homonymous image feature point on the remote sensing image map to the corresponding homonymous image feature point on the video frame picture, so as to realize the registration.
Beneficial effects:
The invention provides a video spatial information query method based on grid coding. A coordinate system is first established for the video space so that every pixel in the video has a corresponding spatial geographic coordinate position; the video space is then registered and matched with the real space, after which grid subdivision and grid coding are performed. Through the grid codes, a one-to-one spatial association is established between the video space and the corresponding real space, and the urban entity spatial data source information is loaded into the video space as information data attached to the grid codes. The urban entity spatial data source information can therefore be displayed while the video is playing and can be queried through the grid codes, overcoming the shortcoming of the prior art that road information, urban underground pipe network information, ground urban component and facility information, building information, in-building association information and the like within the video field of view cannot be effectively displayed and explained.
Drawings
FIG. 1 is a flowchart of the video spatial information query method based on grid coding provided by the present invention;
FIG. 2 is a flow chart of the registration of the video picture space with the physical geographic space provided by the present invention;
FIG. 3 is a schematic diagram of the video coordinate registration method provided by the present invention;
FIG. 4 shows the transformation between the registered (projected) video space and the original video space provided by the present invention;
FIG. 5 is a schematic view of the video grid query page provided by the present invention;
FIG. 6 is a flowchart of a grid query of a video provided by the present invention;
FIG. 7 is a flowchart of the video grid query process when querying a thematic grid database according to the present invention;
FIG. 8 is a flowchart of querying videos (cameras) by grid according to the present invention;
FIG. 9 is a display diagram of the video after spatial registration provided by the present invention;
FIG. 10 is a schematic diagram of restoring the registered video space grid to the original video space grid according to the present invention;
FIG. 11 is a schematic diagram of cameras covering the physical geographic space according to the present invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
As shown in fig. 1, a video spatial information query method based on grid coding comprises the following steps:
S1: registering the pixel coordinates of the ground in the original video space with the geographic coordinates of the ground in the physical space, and establishing for the original video space a coordinate system identical to that of the physical space; this specifically comprises:
S11: selecting several groups of homonymous image feature points on the remote sensing image map and on the video frame picture respectively, wherein the homonymous image feature points are all located on the ground and cover the maximum range of the video frame picture;
S12: for each group of homonymous image feature points, assigning the geographic coordinates of the homonymous image feature point on the remote sensing image map to the corresponding homonymous image feature point on the video frame picture, so as to realize the registration.
For example, fig. 2 shows the flow chart for registering the video picture space with the physical geographic space. A sufficient number of pixel coordinates and physical geographic coordinates of homonymous points are collected on the video picture and on the ground image base map respectively, and the video picture is positioned and registered to the correct geographic position on the ground by projection transformation. After the video registration menu button in the software menu area is clicked, a video selection box is displayed; after the user selects a video, the back end reads the video, takes one frame, and overlays it in the upper right area of the main interface. The front end displays a prompt in a salient position to guide the user through the selection of the registration points, while the back end records the coordinates of the registration points. After 4-8 groups of points have been selected, the user confirms the selection; the back end calculates the projection relation and pushes a projection preview to the front end, and after the user confirms, all relevant parameters are stored in the database.
That is, in actual operation, the basic steps of video coordinate registration are as follows (a code sketch is given after the steps):
Step 1: import the video frame picture into the system, and display the video coverage area in the remote sensing image map of the system.
Step 2: select homonymous image feature points on the remote sensing image map of the system and on the video frame picture respectively; the homonymous points are on the ground and cover the largest possible range of the video frame. As shown in fig. 3, (A1, B1) is one group of homonymous points, and at least 4 such groups need to be selected. In this way, the geographic coordinates of A1 on the remote sensing image map are associated with the video frame pixel coordinates of the homonymous point B1 on the video frame picture.
Step 3: calculate the video coordinate registration. Using the homonymous point groups collected in the previous step, calculate the geographic coordinates corresponding to each pixel of the video frame picture, that is, establish for the video frame a coordinate system identical to that of the remote sensing image map. The calculation of the geographic coordinates is prior art and is not described further in the present invention.
Step 4: store the transformation parameters of the video frame in a video transformation parameter database, also called the "camera registration information database".
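The geographic-coordinate calculation itself is treated as prior art above. For illustration only, the following is a minimal Python sketch of one common realization, assuming a planar ground and a projective (homography) relation estimated from the homonymous point groups by the direct linear transform; all point coordinates are hypothetical.

```python
import numpy as np

def estimate_homography(pixel_pts, geo_pts):
    """Estimate the 3x3 projective transform H mapping video-frame pixel
    coordinates to ground geographic coordinates from >= 4 homonymous
    point pairs, using the direct linear transform (DLT)."""
    assert len(pixel_pts) == len(geo_pts) >= 4
    rows = []
    for (x, y), (X, Y) in zip(pixel_pts, geo_pts):
        rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # H is the right singular vector associated with the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def pixel_to_geo(H, x, y):
    """Convert one ground pixel of the video frame to (lon, lat)."""
    w = H @ np.array([x, y, 1.0])
    return w[0] / w[2], w[1] / w[2]

# Hypothetical homonymous point groups: B-points on the video frame, A-points on the image map.
pixel_pts = [(120, 860), (1700, 830), (1650, 300), (210, 320)]
geo_pts = [(116.3901, 39.9062), (116.3916, 39.9061),
           (116.3917, 39.9075), (116.3902, 39.9076)]

H = estimate_homography(pixel_pts, geo_pts)   # the "transformation parameters" to be stored
print(pixel_to_geo(H, 960, 540))              # geographic coordinate of an arbitrary ground pixel
```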
S2: using the GeoSOT earth subdivision grid coding technology, perform grid subdivision and coding on all spatial elements within the registered video space range to obtain the video space three-dimensional grid map.
It should be noted that: ① A video picture is a perspective projection record of the physical three-dimensional space on a two-dimensional image plane. After the pixel coordinates of the ground in the video space have been registered with the geographic coordinates of the physical ground, the pixel coordinates of the video picture are uniformly converted into physical ground geographic coordinates, and the conversion parameters are recorded.
② When a realistic three-dimensional model is available (such as a BIM model, an oblique photography three-dimensional model, a DEM plus building white model, etc.), the video is projected into the realistic three-dimensional model system through the coordinate transformation relation of ①, and the field-of-view angle, direction, distance, etc. of the three-dimensional model are then continuously adjusted until the video field-of-view picture coincides with the three-dimensional model of the same range; the angle parameters, scale parameters, etc. of the positional relation between the two are recorded, so that the video space is completely matched with the realistic three-dimensional space.
③ Compute the three-dimensional grid map of the video space range and perform grid coding. After the field range of the video space has been determined, the spatial geographic elements within that range (such as CAD drawings, BIM drawings, oblique photography models, etc.) are collected, and grid subdivision and coding are performed on all spatial elements within the range, such as the ground plane and buildings, using the GeoSOT earth subdivision grid coding technology, to form the three-dimensional grid map.
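The GeoSOT standard subdivides an extended earth domain with cells aligned to degree, minute and second boundaries and supports a height dimension; the sketch below only illustrates the underlying principle with a plain binary quadtree over longitude and latitude, so the codes it produces are GeoSOT-like rather than standard-conformant. The subdivision level and the test coordinate are assumptions.

```python
def grid_code_2d(lon, lat, level):
    """GeoSOT-like quadtree code: bisect the lon/lat domain `level` times and
    record one quadrant digit (0-3) per level. The real GeoSOT standard uses an
    extended domain aligned to degree/minute/second boundaries and an octree
    for the height dimension; this only shows the same-position -> same-code idea."""
    lon_lo, lon_hi = -180.0, 180.0
    lat_lo, lat_hi = -90.0, 90.0
    digits = []
    for _ in range(level):
        lon_mid = (lon_lo + lon_hi) / 2.0
        lat_mid = (lat_lo + lat_hi) / 2.0
        east = lon >= lon_mid
        north = lat >= lat_mid
        digits.append(str((north << 1) | east))
        lon_lo, lon_hi = (lon_mid, lon_hi) if east else (lon_lo, lon_mid)
        lat_lo, lat_hi = (lat_mid, lat_hi) if north else (lat_lo, lat_mid)
    return "".join(digits)

# The same geographic position always yields the same code, which is what later
# makes the video-grid / city-grid association a simple key match.
print(grid_code_2d(116.391, 39.907, 16))
```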
S3: associate the video space three-dimensional grid map with the urban entity spatial data source, so that the grid codes of the video space correspond one to one with the urban entity spatial data source.
Two association methods are described below.
Association method 1: associate directly with an existing external urban three-dimensional grid map. For a city that already has a three-dimensional grid map, data association can be realized directly through the grid codes. As described above, the invention performs three-dimensional grid subdivision and grid coding on the spatial scene in the video according to the GeoSOT earth subdivision grid coding technology, and the existing urban three-dimensional grid map is built according to the same GeoSOT earth subdivision grid coding technology. Because the video space shares the coordinate system of the real space and the spatial coding method is identical, the same spatial position receives the same grid code in the video space and in the urban entity space, so the grid codes of the video space and of the urban entity space correspond one to one.
Association method 2: for a city in which no urban three-dimensional grid map has been established, the three-dimensional grid map of the video space contains the spatial position information carried by the grid codes, but the information attribute of each grid can be regarded as "empty", and the other urban information remains to be entered. In this case, attribute data need to be entered into the grids by geographic-position association according to the city's multi-source databases, so as to complete the video grid information. For example, for the three-dimensional grid map of a residential building, the invention can assign to different grids attributes such as road information, urban underground pipe network information, ground urban component and facility information, building information and in-building association information, for example the number of floors, the number of units per floor and the unit addresses of an urban building; each grid can further correspond to residents' gas consumption data, tap water consumption data and the like. In this way each grid not only carries a grid code expressing its position, but also carries information such as address, floor number and unit number, completing the information entry for the grid.
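Both association methods reduce to a key match on the grid code. The following is a minimal sketch under the assumption that the video grid map and the urban entity grid database can be represented as dictionaries; all codes, field names and records are hypothetical.

```python
# Hypothetical video-space grid codes and urban entity grid database.
video_grid_codes = {"2132010311", "2132010312", "2132010313"}
city_grid_db = {
    "2132010312": {"building": "No. 5 residential building", "floor": 3,
                   "unit": "Unit 2-301", "gas_m3": 18.4, "water_t": 6.2},
}

def associate(video_codes, city_db):
    """Method 1: identical positions produce identical codes, so association is a
    plain key join. Codes missing from the city database correspond to method 2:
    their attribute is 'empty' until multi-source data are entered."""
    linked = {code: city_db[code] for code in video_codes if code in city_db}
    empty = sorted(code for code in video_codes if code not in city_db)
    return linked, empty

linked, empty = associate(video_grid_codes, city_grid_db)
print(linked)   # grids that already carry urban entity information
print(empty)    # grids whose attributes still have to be filled in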
S4: transform the video space three-dimensional grid map associated with the urban entity spatial data source into a grid shape matched with the frame picture of the original video space, to obtain the matched video space three-dimensional grid map.
It should be noted that, because the field-of-view projection angle of the original video differs from that of the registered video, the "field-of-view picture" changes in angle and scale, which also deforms the spatial grid, as shown in fig. 9. Therefore, once the registered video space grid is converted back into the original video space, the grid spatial information in the video can be displayed both in the video played after the registration transformation and in the picture played by the original video.
As shown in fig. 4, the basic method for converting the registered video space grid into the original video space grid is as follows:
Step 1: select the registration parameters of the registered video, the registered video frame picture image and the video grid image from the camera registration information database;
Step 2: calculate the transformation relation between the registered video frame and the original video frame using the video registration parameters, and store it in the video grid coding database. In this way, the registered video and its grid are transformed into a grid shape matched with the original video frame picture, to suit people's video-watching habits.
Therefore, even though the spatial grid is deformed, the real spatial position and spatial size corresponding to the grids remain unchanged in both grid maps. The restoration of the registered video space grid to the original video space grid is shown in fig. 10.
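Under the planar-homography assumption of the earlier registration sketch, one way to realize this back-transformation is simply the inverse of the same homography: a ground grid cell that is axis-aligned in geographic space becomes a general quadrilateral in the original frame, while its grid code, real position and size stay unchanged. A sketch (the cell bounds in the commented call are hypothetical):

```python
import numpy as np

def geo_to_pixel(H, lon, lat):
    """Map a ground geographic point back onto the original video frame using
    the inverse of the registration homography H (pixel -> geo)."""
    w = np.linalg.inv(H) @ np.array([lon, lat, 1.0])
    return w[0] / w[2], w[1] / w[2]

def grid_cell_to_frame_quad(H, lon_lo, lat_lo, lon_hi, lat_hi):
    """Return the four frame-space corners of one ground grid cell; only the
    drawing coordinates change, the cell's code and real extent do not."""
    corners = [(lon_lo, lat_lo), (lon_hi, lat_lo), (lon_hi, lat_hi), (lon_lo, lat_hi)]
    return [geo_to_pixel(H, lon, lat) for lon, lat in corners]

# Example, reusing the H estimated in the registration sketch (cell bounds hypothetical):
# quad = grid_cell_to_frame_quad(H, 116.3905, 39.9065, 116.3910, 39.9070)
```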
S5: display the matched video space three-dimensional grid map, wherein each grid can display the associated urban entity spatial data source information.
That is to say, through the registration of the video coordinates and the grid coding of the video space, the video can be associated with external multi-source databases through the spatial grid codes, and during playback the information data contained in the video space can be displayed on the screen as required. The grids themselves can also be displayed during playback, and when a grid is selected, the information of the selected grid can be displayed on the screen through the grid-code association information database.
Further, the matched video space three-dimensional grid map can be queried by the user; a schematic diagram of the video grid query page and the query flow are shown in fig. 5 and fig. 6 respectively. The basic process of a grid information query is as follows (a code sketch follows the list):
Step 1: select and play the video of the area of interest;
Step 2: open the grids in the video, select/click the grid to be queried, and take its grid code as the query condition;
Step 3: query the database associated with the video using the grid code, match the grid code with the grid information data and return the result;
Step 4: highlight the grid, display the corresponding grid information in the information display column, and assign associated numbers to the grid and the information.
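Combining the earlier sketches, the query loop amounts to: click position, registration homography, geographic coordinate, grid code, lookup in the associated database. The sketch below reuses the hypothetical `pixel_to_geo` and `grid_code_2d` helpers defined above and assumes a dictionary stands in for the grid-code association database.

```python
def query_grid_info(H, click_xy, level, grid_info_db):
    """Basic grid information query: turn a click in the playing video into a
    grid code and use that code as the key into the associated database."""
    lon, lat = pixel_to_geo(H, *click_xy)     # the clicked position on the ground
    code = grid_code_2d(lon, lat, level)      # ...expressed as its grid code (step 2)
    info = grid_info_db.get(code)             # match and return (step 3)
    return code, info if info is not None else "no associated information"

# Hypothetical click and database; the returned pair drives the highlighting
# and the information display column of step 4.
# code, info = query_grid_info(H, (960, 540), 16, city_grid_db)
```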
In addition, the invention can also provide information display and query functions for thematic databases. When one or more thematic grid association databases are selected, the related grid information is listed on the video; when an information bar is selected, the corresponding grid in the video is highlighted; a floating window provides a selection switch for the thematic grid database list. FIG. 7 is the flow chart of a query against a thematic grid database. As can be seen from fig. 7, the process steps are substantially the same as the "basic process of grid information query" of fig. 6, except that in step 3 a function for selecting the external database associated with the grid is added. The former associates all external databases by default, whereas here external databases are associated selectively, making the query more precise and focused.
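A minimal sketch of this selective thematic join, assuming the thematic grid association databases can be represented as dictionaries keyed by grid code; the topic names, codes and attribute strings are hypothetical.

```python
# Hypothetical thematic grid association databases (e.g. pipe network, building info).
thematic_dbs = {
    "underground_pipes": {"2132010312": "DN300 rainwater pipe, depth 2.1 m"},
    "building_info": {"2132010312": "No. 5 residential building, 6 floors"},
}

def query_thematic(grid_code, selected_topics, dbs=thematic_dbs):
    """Thematic query: only the databases switched on in the floating window are
    joined, instead of associating all external databases by default."""
    return {topic: dbs[topic][grid_code]
            for topic in selected_topics
            if topic in dbs and grid_code in dbs[topic]}

print(query_thematic("2132010312", ["underground_pipes"]))
```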
Further, on the basis of completing the grid coding of the video space, all video coverage ranges are grid-coded, and each grid code also corresponds to one or more (when several videos cover the same position) video (camera) records, so that the grid codes can be used to query and retrieve video (camera) information in reverse. This function can query real-time video (camera) information through the grids, and can also be used to query and index massive historical video data and to find the video recordings of a specific past time period and position (grid). The specific implementation steps can be as follows:
perform grid subdivision and grid coding on the field space according to the three-dimensional grid map of the video space;
provide cameras for the field space such that each grid code of the field space covered by the camera fields of view corresponds to at least one camera.
In this way, by selecting a grid area and a time period in the physical space, the cameras or historical videos covering that grid area can be queried and retrieved.
Further, fig. 8 shows the flow chart for querying videos (cameras) by grid. The basic flow of querying videos with grid codes includes the following steps (a code sketch follows the steps):
Step 1: select the grids for which the existence of historical video recordings needs to be checked. The grids can be selected in various ways, for example as an area composed of multiple grids, a path represented by grids, or a combination of several grid areas;
Step 2: select the time period of the historical video to be searched. The search for historical video is thus constrained by grid position and time period;
Step 3: search the video grid coding database for the video data matching the grid code positions and the time period, and return them to the front end for display and playback.
It can be seen that the invention retrieves video information with grid codes as the index condition by selecting one or more grids (which may be contiguous or scattered). If these spatial grid codes have already been associated with the video space, the corresponding video information can be retrieved. When contiguous grids are selected along a road, the distribution of videos (cameras) on that path can be checked, serving the monitoring of conditions along the route. After a video has been retrieved through the grids, the grid information display and query process of that video can be entered.
Further, with the present invention, the real-world range corresponding to each video (the field-of-view range of the camera) is grid-subdivided and grid-coded, and each grid code corresponds to the video (camera) that covers it, as shown in fig. 11, so that video (historical video and camera) data can in turn be retrieved through the ground video grids.
Therefore, after the coverage areas of all videos (cameras) of a city have been grid-subdivided and grid-coded, finding the video covering a certain region only requires marking a range (an area or a path) on the city grid map and setting the time parameters; the videos (cameras) of the specific time period and position can then be retrieved and played through the grid codes. The historical video database and the video (camera) coverage database are associated by "grid code" to "camera ID", jointly achieving the goal of grid-based video retrieval.
In summary, the prior art cannot effectively display and explain road information, urban underground pipe network information, ground urban component and facility information, building information and in-building association information within the video field of view. The present invention provides the video manager with a key engine and components that associate the video space with the real space by spatial grid-code matching, forming a one-to-one grid space correspondence and associating external data with the video space, so that during video playback an observer can extract and display, through the video space, the information data of any target of interest, including road information, urban underground pipe network information, ground urban component and facility information, building information and in-building association information.
On the basis of completing the grid coding of the video space, the present invention can also find, through the grid codes, the videos (that is, the corresponding cameras) that can cover a given grid position, realizing bidirectional video-grid-information query and retrieval.
The present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof, and it will be understood by those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A video spatial information query method based on grid coding, characterized by comprising the following steps:
S1: registering the pixel coordinates of the ground in the original video space with the geographic coordinates of the ground in the physical space, and establishing for the original video space a coordinate system identical to that of the physical space;
S2: performing grid subdivision and coding on all spatial elements within the registered video space range using the GeoSOT earth subdivision grid coding technology, to obtain a video space three-dimensional grid map;
S3: associating the video space three-dimensional grid map with the urban entity space three-dimensional grid map, so that the grid information of the video space corresponds one to one, through the grid codes, with the urban entity spatial data source information;
S4: transforming the video space three-dimensional grid map associated with the urban entity spatial data source into a grid shape matched with the frame picture of the original video space, to obtain a matched video space three-dimensional grid map;
S5: displaying the matched video space three-dimensional grid map, wherein each grid can display the associated urban entity spatial data source information.
2. The video spatial information query method based on grid coding according to claim 1, wherein the urban entity spatial data source information comprises road information, urban underground pipe network information, ground urban component and facility information, building information, and in-building association information.
3. The video spatial information query method based on grid coding according to claim 1, wherein the urban entity spatial data source information comprises the number of floors, the number of units per floor and the unit addresses of an urban building.
4. The video spatial information query method based on grid coding according to claim 1, wherein the urban entity spatial data source information is the data information associated with each grid in an urban entity three-dimensional grid map obtained by the GeoSOT earth subdivision grid coding technology.
5. The video spatial information query method based on grid coding according to claim 1, further comprising the following steps:
performing grid subdivision and grid coding on the field space according to the three-dimensional grid map of the video space;
providing cameras for the field space such that each grid code of the field space covered by the camera fields of view corresponds to at least one camera.
6. The video spatial information query method based on grid coding according to claim 1, wherein in step S1, registering the pixel coordinates of the ground in the original video space with the geographic coordinates of the ground in the physical space specifically comprises:
S11: selecting several groups of homonymous image feature points on the remote sensing image map and on the video frame picture respectively, wherein the homonymous image feature points are all located on the ground and cover the maximum range of the video frame picture;
S12: for each group of homonymous image feature points, assigning the geographic coordinates of the homonymous image feature point on the remote sensing image map to the corresponding homonymous image feature point on the video frame picture, so as to realize the registration.
CN202010076780.3A 2020-01-23 2020-01-23 Video space information query method based on grid coding Active CN111309967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010076780.3A CN111309967B (en) 2020-01-23 2020-01-23 Video space information query method based on grid coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010076780.3A CN111309967B (en) 2020-01-23 2020-01-23 Video space information query method based on grid coding

Publications (2)

Publication Number Publication Date
CN111309967A (en) 2020-06-19
CN111309967B CN111309967B (en) 2023-12-01

Family

ID=71158144

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010076780.3A Active CN111309967B (en) 2020-01-23 2020-01-23 Video space information query method based on grid coding

Country Status (1)

Country Link
CN (1) CN111309967B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103488736A (en) * 2013-09-18 2014-01-01 中国科学技术大学 Method and system for establishing multisource geospatial information correlation model
CN103984710A (en) * 2014-05-05 2014-08-13 深圳先进技术研究院 Video interaction inquiry method and system based on mass data
CN104331929A (en) * 2014-10-29 2015-02-04 深圳先进技术研究院 Crime scene reduction method based on video map and augmented reality
CN107180066A (en) * 2017-01-31 2017-09-19 张军民 The three-dimensional police geographical information platform and system architecture encoded based on three dimensions
CN110516014A (en) * 2019-01-18 2019-11-29 南京泛在地理信息产业研究院有限公司 A method of two-dimensional map is mapped to towards urban road monitor video
CN109992636A (en) * 2019-03-22 2019-07-09 中国人民解放军战略支援部队信息工程大学 Space-time code method, temporal index and querying method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132952A (en) * 2020-08-18 2020-12-25 北京旋极伏羲科技有限公司 Construction method of three-dimensional grid map based on subdivision frame
CN112132952B (en) * 2020-08-18 2023-09-08 北斗伏羲信息技术有限公司 Construction method of three-dimensional grid map based on subdivision frame
CN111967947A (en) * 2020-09-04 2020-11-20 杭州拼便宜网络科技有限公司 Commodity display method and device, electronic equipment and storage medium
CN112184904A (en) * 2020-09-28 2021-01-05 中国石油集团工程股份有限公司 Digital integration method and device
CN112507053A (en) * 2020-12-11 2021-03-16 中国石油集团工程股份有限公司 Method for establishing visualization system and application method
CN112507053B (en) * 2020-12-11 2024-04-26 中国石油集团工程股份有限公司 Method for establishing visual system and application method
CN112687007A (en) * 2020-12-22 2021-04-20 北京旋极伏羲科技有限公司 LOD technology-based stereo grid map generation method
CN112687006A (en) * 2020-12-22 2021-04-20 北京旋极伏羲科技有限公司 Rapid building three-dimensional grid data graph generation method
CN112687006B (en) * 2020-12-22 2023-09-08 北斗伏羲信息技术有限公司 Rapid building three-dimensional grid data graph generation method
CN112687007B (en) * 2020-12-22 2023-09-08 北斗伏羲信息技术有限公司 Stereoscopic grid chart generation method based on LOD technology
CN117670946A (en) * 2023-12-04 2024-03-08 北京星河大地数字科技有限公司 Video target geographic position mapping method and system

Also Published As

Publication number Publication date
CN111309967B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN111309967B (en) Video space information query method based on grid coding
CN110874391B (en) Data fusion and display method based on urban space three-dimensional grid model
US7813596B2 (en) System and method for creating, storing and utilizing images of a geographic location
US7944547B2 (en) Method and system of generating 3D images with airborne oblique/vertical imagery, GPS/IMU data, and LIDAR elevation data
CN100489851C (en) Method for establishing panorama electronic map service
CN101656822B (en) Apparatus and method for processing image
CN101833896B (en) Geographic information guide method and system based on augment reality
CN107067447B (en) Integrated video monitoring method for large spatial region
US20150262391A1 (en) System and method of displaying annotations on geographic object surfaces
JPH11259502A (en) Image information display device
KR101876114B1 (en) Terminal, server, system for 3d modeling and 3d modeling method using the same
CN105183154B (en) A kind of interaction display method of virtual objects and live-action image
CN105183823A (en) Interactive display system for virtual object and real image
CN110660125B (en) Three-dimensional modeling device for power distribution network system
KR100375553B1 (en) Geographic Information Service Method of Using Internet Network
CN108021766A (en) The virtual reality scenario generation method and device built in a kind of digital city
CN108287924A (en) One kind can the acquisition of positioning video data and organizing search method
CN110162585B (en) Real-time imaging three-dimensional modeling historical geographic information system
KR102028319B1 (en) Apparatus and method for providing image associated with region of interest
CN110378059A (en) A kind of village reutilization planning system
CN108427935B (en) Street view comparison image generation method and device
CN115713603A (en) Multi-type block building group form intelligent generation method based on building space map
Hong et al. The use of CCTV in the emergency response: A 3D GIS perspective
JP5553483B2 (en) Optimal oblique photograph providing method, optimum oblique photograph providing system, and optimum oblique photograph providing apparatus
CN110196638B (en) Mobile terminal augmented reality method and system based on target detection and space projection

Legal Events

Date Code Title Description
PB01 Publication

SE01 Entry into force of request for substantive examination

CB02 Change of applicant information
Address after: 101-3, 4th floor, building 12, yard 3, fengxiu Middle Road, Haidian District, Beijing 100094
Applicant after: Beijing Beidou Fuxi Technology Co.,Ltd.
Address before: 100012 Room 1601, Floor 16, Building 1, Yard 19, Beiyuan East Road, Chaoyang District, Beijing
Applicant before: Beijing Xuanji Fuxi Technology Co.,Ltd.

TA01 Transfer of patent application right
Effective date of registration: 20230825
Address after: Room 806-808, Floor 8, A1 Building, Phase I, Zhong'an Chuanggu Science Park, No. 900, Wangjiang West Road, High tech Zone, Hefei City, Anhui Province, 230000
Applicant after: Beidou Fuxi Information Technology Co.,Ltd.
Address before: 101-3, 4th floor, building 12, yard 3, fengxiu Middle Road, Haidian District, Beijing 100094
Applicant before: Beijing Beidou Fuxi Technology Co.,Ltd.

GR01 Patent grant

PE01 Entry into force of the registration of the contract for pledge of patent right
Denomination of invention: A Video Spatial Information Query Method Based on Grid Encoding
Granted publication date: 20231201
Pledgee: Anhui pilot Free Trade Zone Hefei area sub branch of Huishang Bank Co.,Ltd.
Pledgor: Beidou Fuxi Information Technology Co.,Ltd.
Registration number: Y2024980013938