CN106998476B - Video viewing method and device based on geographic information system


Info

Publication number
CN106998476B
CN106998476B (application CN201710221241.2A)
Authority
CN
China
Prior art keywords
video data
target video
target
position information
point position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710221241.2A
Other languages
Chinese (zh)
Other versions
CN106998476A (en)
Inventor
岳英丹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Sample Honzon Visual Technology Co ltd
Original Assignee
Nanjing Sample Honzon Visual Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Sample Honzon Visual Technology Co ltd filed Critical Nanjing Sample Honzon Visual Technology Co ltd
Priority to CN201710221241.2A
Publication of CN106998476A
Application granted
Publication of CN106998476B
Legal status: Active (current)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29: Geographical information databases
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/2343: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs, involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/239: Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests
    • H04N 21/2393: Interfacing the upstream path of the transmission network, e.g. prioritizing client content requests, involving handling client requests
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/4402: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs, involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60: Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63: Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/637: Control signals issued by the client directed to the server or network components
    • H04N 21/6377: Control signals issued by the client directed to the server or network components, directed to the server
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456: Structuring of content, e.g. decomposing content into time segments, by decomposing the content in the time domain, e.g. in time segments
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85: Assembly of content; Generation of multimedia applications
    • H04N 21/854: Content authoring
    • H04N 21/8547: Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Remote Sensing (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention provides a video viewing method and a video viewing device based on a geographic information system. The method comprises the following steps: storing video data acquisition addresses and data point position information in association in a configuration table of a geographic information system application, wherein one piece of data point position information is matched with one video data acquisition address; receiving a video viewing request for a target data point, wherein the video viewing request comprises target data point position information and target video timestamp description information; determining a target video data acquisition address according to the configuration table and the target data point position information; and determining a target video data segment according to the target video data acquisition address and the target video timestamp description information. The invention also discloses a corresponding video viewing device.

Description

Video viewing method and device based on geographic information system
Technical Field
The invention relates to the technical field of image processing, in particular to a video viewing method and device based on a geographic information system.
Background
A Geographic Information System (GIS) is an interdisciplinary technology that, on the basis of geographic space, uses geographic model analysis methods to provide a variety of spatial and dynamic geographic information in real time; it is a computer technology system that supports geographic research and geographic decision-making. Its basic function is to convert data such as coordinates (typically taken from a database or spreadsheet file, or entered directly in a program) into data points displayed on a geographic map, and to allow the displayed results to be browsed, manipulated, and analyzed. However, a data point displayed in a GIS cannot display the video file corresponding to that data point, so the video data of the data point in different time periods cannot be viewed.
Disclosure of Invention
In view of the above, the present invention is directed to a video viewing method and apparatus based on a geographic information system, which is intended to solve or at least alleviate the above-mentioned problems.
In a first aspect, an embodiment of the present invention provides a video viewing method based on a geographic information system, where the method includes:
storing video data acquisition addresses and data point position information in association in a configuration table of a geographic information system application, wherein one piece of data point position information is matched with one video data acquisition address;
receiving a video viewing request for a target data point, wherein the video viewing request comprises target data point position information and target video timestamp description information;
determining a target video data acquisition address according to the configuration table and the position information of the target data point;
and determining a target video data segment according to the target video data acquisition address and the target video timestamp description information.
Optionally, the determining a target video data acquisition address according to the configuration table and the target data point position information includes:
traversing each data point position information in the configuration table, determining data point position information consistent with the target data point position information, and taking a video data acquisition address corresponding to the data point position information as a target video data acquisition address.
Optionally, the determining a target video data segment according to the target video data obtaining address and the target video timestamp description information includes:
acquiring target video data according to the target video data acquisition address;
and determining a target video data segment from the target video data according to the target video timestamp description information.
Optionally, the determining a target video data segment from the target video data according to the target video timestamp description information includes:
dividing the target video data into a plurality of video data segments according to a specified time slice;
establishing an index file according to the video data segments, wherein the index file comprises the correspondence between video data segment timestamp description information and the video data segments;
and determining the target video data segment according to the index file and the target video timestamp description information.
Optionally, the determining a target video data segment according to the target video data obtaining address and the target video timestamp description information includes:
generating a control instruction according to the target video data acquisition address and the target video timestamp description information;
and transmitting the control instruction to a server, wherein the control instruction is used for controlling the server to determine a target video data segment according to the target video data acquisition address and the target video timestamp description information.
In a second aspect, an embodiment of the present invention provides a video viewing apparatus based on a geographic information system, the apparatus including:
the storage unit is used for storing video data acquisition addresses and data point position information in association in a configuration table of a geographic information system (GIS) application, wherein one piece of data point position information is matched with one video data acquisition address;
the receiving unit is used for receiving a video viewing request for a target data point, wherein the video viewing request comprises target data point position information and target video timestamp description information;
the first processing unit is used for determining a target video data acquisition address according to the configuration table and the position information of the target data point;
and the second processing unit is used for determining a target video data segment according to the target video data acquisition address and the target video timestamp description information.
Optionally, the first processing unit is further configured to:
traversing each data point position information in the configuration table, determining data point position information consistent with the target data point position information, and taking a video data acquisition address corresponding to the data point position information as a target video data acquisition address.
Optionally, the second processing unit is further configured to:
acquiring target video data according to the target video data acquisition address;
and determining a target video data segment from the target video data according to the target video timestamp description information.
Optionally, the second processing unit is further configured to:
dividing the target video data into a plurality of video data segments according to a specified time slice;
establishing an index file according to the video data segments, wherein the index file comprises the correspondence between video data segment timestamp description information and the video data segments;
and determining the target video data segment according to the index file and the target video timestamp description information.
Optionally, the second processing unit is further configured to:
generating a control instruction according to the target video data acquisition address and the target video timestamp description information;
and transmitting the control instruction to a server, wherein the control instruction is used for controlling the server to determine a target video data segment according to the target video data acquisition address and the target video timestamp description information.
According to the technical scheme of the invention, the videos of the target data points are selected for viewing through the application of the geographic information system, and further, the videos of the corresponding time period can be selected for viewing, so that the video viewing becomes convenient and fast, and meanwhile, the user experience is improved.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a flow chart illustrating a video viewing method according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating a target video data segment determining method according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating another method for determining a target video data segment according to an embodiment of the present invention;
fig. 4 shows a schematic structural diagram of a video viewing apparatus provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a video viewing method according to an embodiment of the present invention. As shown in fig. 1, the method is performed in a computing device, and the method begins at step S110.
In step S110, the video data acquisition address and the data point location information are stored in association with each other in the configuration table of the geographic information system application. Wherein one of the data point position information matches one of the video data acquisition addresses.
In the embodiment of the present application, a unique number may be assigned to each record to be stored in the configuration table of the geographic information system (GIS) application, and the records are stored sequentially according to these numbers, so that each number, each piece of data point position information, and each video data acquisition address are in one-to-one correspondence. The data in the configuration table may be imported from a database or an EXCEL table, or may be added manually; the data already stored in the configuration table may also be edited or modified manually. When the data in the configuration table are imported from an external source, they are stored in the configuration table in the same order as in the source database. Before the position information of each data point is imported into the configuration table, it needs to be converted according to the coordinate system preconfigured in the GIS. Each data point may be a building, a traffic intersection, a road sign, a signboard, or the like in the GIS. Each piece of video data may be video data shot by a camera arranged at the corresponding data point; the video data may be stored in the computing device executing the video viewing method of the present invention, or in a server or in the cloud, and may be obtained via the video data acquisition address, which is not limited by the present invention.
For example, when the map in the GIS is a traffic map of Beijing and the data points are traffic intersections, each data point stores the acquisition address of the video data shot by the camera arranged at that intersection, and the video obtained via the acquisition address is typically a full-day video of that intersection.
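By way of illustration only, the following sketch shows one possible in-memory form of such a configuration table and how it might be populated from manually added or externally imported records. The field names (id, position, video_url), the CSV import, and the example coordinates and camera URL are assumptions made for the sake of the example and are not part of this disclosure.

```python
import csv

# Configuration table of the GIS application: each record associates one piece
# of data point position information with exactly one video data acquisition address.
config_table = []

def add_entry(point_id, lon, lat, video_url):
    """Store one data point position and its video data acquisition address."""
    config_table.append({
        "id": point_id,              # unique sequential number
        "position": (lon, lat),      # data point position information (GIS coordinates)
        "video_url": video_url,      # video data acquisition address
    })

def import_from_csv(path):
    """Import externally prepared records (e.g. exported from a database or EXCEL)."""
    with open(path, newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            add_entry(i, float(row["lon"]), float(row["lat"]), row["video_url"])

# Manual addition, e.g. a traffic intersection (coordinates and URL illustrative):
add_entry(1, 116.3066, 39.9785, "rtsp://cameras.example.com/suzhou_street_south")
```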
In step S120, a video viewing request for a target data point is received, where the video viewing request includes target data point location information and target video timestamp description information.
In the embodiment of the present application, each data point corresponds, for example one-to-one, to a building on the map, and the target video timestamp description information includes a target video data segment start point and a target video data segment end point; this is not a limitation of the present invention.
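Purely as an illustration of the request described above, its payload might be shaped as follows; the field names and values are assumed for the example and are not prescribed by the present scheme.

```python
# Hypothetical shape of a video viewing request for a target data point.
video_view_request = {
    "target_position": (116.3066, 39.9785),   # target data point position information
    "timestamp_description": {
        "start": "2017-04-06T08:00:00",        # target video data segment start point
        "end": "2017-04-06T09:00:00",          # target video data segment end point
    },
}
```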
In step S130, a target video data acquisition address is determined according to the configuration table and the target data point position information.
Optionally, step S130 specifically includes: traversing each data point position information in the configuration table, determining data point position information consistent with the target data point position information, and taking a video data acquisition address corresponding to the data point position information as a target video data acquisition address;
in step S140, a target video data segment is determined according to the target video data acquisition address and the target video timestamp description information.
The target video data segment may be determined in either of the following ways.
Fig. 2 is a flowchart of a method for determining a target video data segment according to an embodiment of the present invention. As shown in fig. 2, the method begins at step S210.
In step S210, target video data is acquired according to the target video data acquisition address.
In step S220, a target video data segment is determined from the target video data according to the target video timestamp description information.
This approach is suitable when the amount of video data is relatively small.
For example, taking the traffic intersection map of Beijing as an example, when the target data point is the south entrance of the Suzhou Street intersection, the configuration table is searched, according to the coordinate information of that entrance, for data point position information consistent with the target data point. Once the data point position information consistent with the position information of the south entrance of the Suzhou Street intersection is found, the video data acquisition address corresponding to that data point position information is taken as the target video acquisition address. The video data corresponding to the data point is then acquired according to the target video acquisition address, and the target video data segment is determined from the target video data according to the target video timestamp description information.
Optionally, step S220 specifically includes:
dividing the target video data into a plurality of video data segments according to a specified time slice;
establishing an index file according to the video data segments, wherein the index file comprises the correspondence between video data segment timestamp description information and the video data segments;
and determining the target video data segment according to the index file and the target video timestamp description information.
The index file may be in the m3u8 format commonly used in the industry, or in a proprietary index file format defined according to user settings; this is not a limitation of the present invention.
For example, after the target video data (with a total duration of 24 hours) is acquired, the video data is divided into 24 video data segments according to a specified time slice of, for example, 1 hour. An index file is then established from these 24 video data segments, recording the correspondence between each video data segment and the timestamp description information of that segment. By searching the index file according to the target video timestamp description information, the target video data segment of the target data point can be determined.
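A minimal sketch of this time-slice segmentation and index lookup is given below, assuming a simple in-memory index of (start, end, segment) records rather than an actual m3u8 playlist; the segment file names and the one-hour slice are illustrative assumptions.

```python
from datetime import datetime, timedelta

def build_index(video_start, total_hours=24, slice_hours=1):
    """Divide a day-long recording into fixed time slices and build an index
    that maps each slice's timestamp range to a video data segment."""
    index = []
    for n in range(int(total_hours / slice_hours)):
        seg_start = video_start + timedelta(hours=n * slice_hours)
        index.append({
            "start": seg_start,
            "end": seg_start + timedelta(hours=slice_hours),
            "segment": f"segment_{n:02d}.ts",   # illustrative segment name
        })
    return index

def lookup_segments(index, want_start, want_end):
    """Return the segments whose timestamp range overlaps the requested range."""
    return [e["segment"] for e in index
            if e["start"] < want_end and e["end"] > want_start]

index = build_index(datetime(2017, 4, 6, 0, 0))
segments = lookup_segments(index,
                           datetime(2017, 4, 6, 8, 0),
                           datetime(2017, 4, 6, 9, 0))
# segments -> ['segment_08.ts']
```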
Fig. 3 is a flowchart of another method for determining a target video data segment according to an embodiment of the present invention. As shown in fig. 3, the method begins at step S310.
In step S310, a control command is generated according to the target video data acquisition address and the target video timestamp description information.
In step S320, the control instruction is transmitted to a server, where the control instruction is used to control the server to determine a target video data segment according to the target video data obtaining address and the target video timestamp description information.
This approach is suitable when the amount of video data is relatively large.
After a control instruction carrying the target video data acquisition address and the target video timestamp description information is generated, the control instruction is transmitted to the server. After receiving the control instruction, the server obtains the corresponding target video data according to the target video data acquisition address in the control instruction and divides the target video data into a plurality of video data segments according to a specified time slice; the segmentation process is the same as the division step in step S220 and is not described again here.
The server establishes an index file from the video data segments, where the index file comprises the correspondence between the video data segment timestamp description information and the video data segments, and determines the target video data segment according to the index file and the target video timestamp description information. The server then transmits the target video data segment to the computing device executing the video viewing method.
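As a sketch only, the client side of this flow could generate and transmit the control instruction roughly as follows, assuming a plain HTTP POST to a hypothetical server endpoint; the payload shape and the endpoint are assumptions, not part of this disclosure.

```python
import json
import urllib.request

def request_segment_from_server(server_url, video_url, start, end):
    """Generate a control instruction carrying the target video data acquisition
    address and the target video timestamp description information, and send it
    to the server, which performs the segmentation, indexing and segment lookup."""
    control_instruction = {
        "video_url": video_url,                                 # target video data acquisition address
        "timestamp_description": {"start": start, "end": end},  # target segment start/end points
    }
    req = urllib.request.Request(
        server_url,                                  # hypothetical server endpoint
        data=json.dumps(control_instruction).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # e.g. the location(s) of the target video data segment
```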
In addition, a video push channel is established between the server and the computing device (such as a PC) executing the video viewing method of the present invention, and the target video data segment is pushed, via the video push channel, to a player or a web client browser on the computing device. For example, the browser may be an IE9 browser with an H.264 video decoder, or an Opera or Chrome browser with a VP8 video decoder.
It should be noted that, after the video data is divided into time slices, the format of the target video data segment may be the same as that of the original target video data; of course, format conversion may also be performed according to different requirements. In the scheme of the present invention, the format of the target video data is chosen to match the client browser in use, and as client browser technology develops, the range of video data formats a browser can support will grow accordingly; the video data formats mentioned above are therefore only illustrative and do not limit the scheme of the present invention.
According to the technical scheme of the invention, the videos of the target data points are selected for viewing through the application of the geographic information system, and further, the videos of the corresponding time period can be selected for viewing, so that the video viewing becomes convenient and fast, and meanwhile, the user experience is improved.
Fig. 4 is a schematic structural diagram of a video viewing apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus resides in a computing device or server, comprising: a storage unit 410, a receiving unit 420, a first processing unit 430 and a second processing unit 440.
The storage unit 410 is configured to store video data acquisition addresses and data point position information in association in a configuration table of a geographic information system (GIS) application, where one piece of data point position information matches one video data acquisition address.
The receiving unit 420 is configured to receive a video viewing request for a target data point, where the video viewing request includes target data point location information and target video timestamp description information.
The first processing unit 430 is configured to determine a target video data acquisition address according to the configuration table and the position information of the target data point.
Optionally, the first processing unit 430 is further configured to:
traversing each data point position information in the configuration table, determining data point position information consistent with the target data point position information, and taking a video data acquisition address corresponding to the data point position information as a target video data acquisition address.
The second processing unit 440 is configured to determine a target video data segment according to the target video data obtaining address and the target video timestamp description information.
Optionally, the second processing unit 440 is further configured to:
acquiring target video data according to the target video data acquisition address;
and determining a target video data segment from the target video data according to the target video timestamp description information.
Optionally, the second processing unit 440 is further configured to:
dividing the target video data into a plurality of video data segments according to a specified time slice;
establishing an index file according to the video data segments, wherein the index file comprises the correspondence between video data segment timestamp description information and the video data segments;
and determining the target video data segment according to the index file and the target video timestamp description information.
Optionally, the second processing unit 440 is further configured to:
generating a control instruction according to the target video data acquisition address and the target video timestamp description information;
and transmitting the control instruction to a server, wherein the control instruction is used for controlling the server to determine a target video data segment according to the target video data acquisition address and the target video timestamp description information.
The video viewing apparatus provided by the embodiment of the present invention may be specific hardware on a device, or software or firmware installed on a device. The apparatus provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, where the apparatus embodiments do not mention a detail, reference may be made to the corresponding content in the method embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided by the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting them, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify, or easily conceive of changes to, the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the present invention, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A video viewing method based on a geographic information system is characterized by comprising the following steps:
storing video data acquisition addresses and data point position information in a configuration table applied to a geographic information system in a correlated manner, wherein one piece of data point position information is matched with one video data acquisition address, and the data point position information is building position information;
receiving a video viewing request for a target data point, wherein the video viewing request comprises target data point position information and target video timestamp description information; the target video timestamp description information comprises a target video data segment starting point and a target video data segment ending point;
determining a target video data acquisition address according to the configuration table and the position information of the target data point;
acquiring target video data according to the target video data acquisition address, and dividing the target video data into a plurality of video data segments according to a specified time slice;
establishing an index file according to the video data fragments, wherein the index file comprises the corresponding relation between the video data fragment timestamp description information and the video data fragments;
and determining the target video data segment according to the index file and the target video timestamp description information.
2. The method of claim 1, wherein the determining a target video data acquisition address according to the configuration table and the target data point position information comprises:
traversing each data point position information in the configuration table, determining data point position information consistent with the target data point position information, and taking a video data acquisition address corresponding to the data point position information as a target video data acquisition address.
3. The method of claim 1, wherein determining a target video data segment based on the target video data capture address and the target video timestamp description information comprises:
generating a control instruction according to the target video data acquisition address and the target video timestamp description information;
and transmitting the control instruction to a server, wherein the control instruction is used for controlling the server to determine a target video data segment according to the target video data acquisition address and the target video timestamp description information.
4. A video viewing apparatus based on a geographic information system, the apparatus comprising:
the storage unit is used for storing video data acquisition addresses and data point position information in a configuration table of a geographic information system application GIS in a correlated manner, wherein one data point position information is matched with one video data acquisition address, and the data point position information is building position information;
the receiving unit is used for receiving a video viewing request for a target data point, wherein the video viewing request comprises target data point position information and target video timestamp description information; the target video timestamp description information comprises a target video data segment starting point and a target video data segment ending point;
the first processing unit is used for determining a target video data acquisition address according to the configuration table and the position information of the target data point;
and the second processing unit is used for acquiring target video data according to the target video data acquisition address, dividing the target video data into a plurality of video data fragments according to a specified time slice, establishing an index file according to the plurality of video data fragments, wherein the index file comprises the corresponding relation between video data fragment timestamp description information and the video data fragments, and determining the target video data fragments according to the index file and the target video timestamp description information.
5. The apparatus according to claim 4, wherein the first processing unit is further configured to:
traversing each data point position information in the configuration table, determining data point position information consistent with the target data point position information, and taking a video data acquisition address corresponding to the data point position information as a target video data acquisition address.
6. The apparatus according to claim 4, wherein the second processing unit is further configured to:
generating a control instruction according to the target video data acquisition address and the target video timestamp description information;
and transmitting the control instruction to a server, wherein the control instruction is used for controlling the server to determine a target video data segment according to the target video data acquisition address and the target video timestamp description information.
CN201710221241.2A 2017-04-06 2017-04-06 Video viewing method and device based on geographic information system Active CN106998476B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710221241.2A CN106998476B (en) 2017-04-06 2017-04-06 Video viewing method and device based on geographic information system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710221241.2A CN106998476B (en) 2017-04-06 2017-04-06 Video viewing method and device based on geographic information system

Publications (2)

Publication Number Publication Date
CN106998476A CN106998476A (en) 2017-08-01
CN106998476B (en) 2020-06-30

Family

ID=59435147

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710221241.2A Active CN106998476B (en) 2017-04-06 2017-04-06 Video viewing method and device based on geographic information system

Country Status (1)

Country Link
CN (1) CN106998476B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929091A (en) * 2019-11-18 2020-03-27 江苏燕宁工程科技集团有限公司 Query method and system for road operation and maintenance video inspection
CN111161055A (en) * 2020-03-05 2020-05-15 中国邮政储蓄银行股份有限公司 Data processing method and system
CN113821685A (en) * 2021-11-23 2021-12-21 北京亮亮视野科技有限公司 Data processing method and device for Internet of things equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101122466A (en) * 2006-08-09 2008-02-13 高德软件有限公司 Electronic map track and its video data memory and inquiring method
CN101272397A (en) * 2008-05-05 2008-09-24 南京师范大学 Method for acquiring addressable stream media based on ASF data amalgamation technology
CN101547360A (en) * 2009-05-08 2009-09-30 南京师范大学 Localizable video file format and method for collecting data of formatted file
CN101576926A (en) * 2009-06-04 2009-11-11 浙江大学 Monitor video searching method based on geographic information system
CN102289520A (en) * 2011-09-15 2011-12-21 山西四和交通工程有限责任公司 Traffic video retrieval system and realization method thereof
US8593485B1 (en) * 2009-04-28 2013-11-26 Google Inc. Automatic video and dense image-based geographic information matching and browsing
CN103984710A (en) * 2014-05-05 2014-08-13 深圳先进技术研究院 Video interaction inquiry method and system based on mass data
CN104731856A (en) * 2015-01-09 2015-06-24 杭州好好开车科技有限公司 Dynamic real-time traffic status video querying method and device
CN105120321A (en) * 2015-08-21 2015-12-02 北京佳讯飞鸿电气股份有限公司 Video searching method, video storage method and related devices

Also Published As

Publication number Publication date
CN106998476A (en) 2017-08-01

Similar Documents

Publication Publication Date Title
EP3242225B1 (en) Method and apparatus for determining region of image to be superimposed, superimposing image and displaying image
KR101759415B1 (en) Real world analytics visualization
CN106998476B (en) Video viewing method and device based on geographic information system
CN107527186B (en) Electronic reading management method and device and terminal equipment
KR102361112B1 (en) Extracting similar group elements
CN105630792B (en) Information display and push method and device
CN112487883B (en) Intelligent pen writing behavior feature analysis method and device and electronic equipment
US20130212230A1 (en) Mobile terminal, data distribution server, data distribution system, and data distribution method
CN106611065B (en) Searching method and device
CN108196902B (en) Method and apparatus for displaying open screen advertisements
CN112487871A (en) Handwriting data processing method and device and electronic equipment
CN109117448B (en) Thermodynamic diagram generation method and device
CN110389981B (en) Data display method, device, electronic equipment and computer readable storage medium
CN106462628B (en) System and method for automatically pushing location-specific content to a user
CN112486337B (en) Handwriting graph analysis method and device and electronic equipment
CN111641690B (en) Session message processing method and device and electronic equipment
KR101913567B1 (en) Operating method of web server, screen shot server, web browser and target terminal for sharing digital contents
CN113139082A (en) Multimedia content processing method, apparatus, device and medium
CN112487876A (en) Intelligent pen character recognition method and device and electronic equipment
CN105610596B (en) Resource directory management method and network terminal
US10108882B1 (en) Method to post and access information onto a map through pictures
CN112487897B (en) Handwriting content evaluation method and device and electronic equipment
CN113360794A (en) Scenic spot recommendation method for travel app and related products
CN104572620B (en) A kind of method and apparatus for showing chapters and sections content
CN113973235A (en) Interactive information display method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant