CN105812660A - Video processing method based on geographic position - Google Patents
- Publication number
- CN105812660A CN105812660A CN201610147581.0A CN201610147581A CN105812660A CN 105812660 A CN105812660 A CN 105812660A CN 201610147581 A CN201610147581 A CN 201610147581A CN 105812660 A CN105812660 A CN 105812660A
- Authority
- CN
- China
- Prior art keywords
- video
- additional information
- video data
- geographical position
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/647—Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00249—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a photographic apparatus, e.g. a photographic printer or a projector
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Library & Information Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention relates to a video processing method based on geographic position, applied to a mobile photographing terminal that comprises a camera. The method comprises the following steps: invoking the camera to shoot, thereby obtaining video data; obtaining the current video frame to be sent from the video data; obtaining additional information corresponding to the current frame, the additional information at least comprising the terminal's current positioning information; adding the additional information to the data packet of the current frame; and sending the data packet, with the additional information attached, to a cloud server or a remote client, so that the cloud server or remote client can process the video frame according to the positioning information in the received packet. Because every video frame carries positioning information, and may also carry other additional information, the video data can be retrieved by additional information and by geographic position, and rich video applications can be provided on this basis.
Description
Technical field
The present invention relates to video processing techniques, and in particular to a video processing method based on geographic position.
Background art
With the development of network communication technology and network infrastructure, it has become possible to provide a wide variety of multimedia content over the Internet. Meanwhile, with the spread of mobile electronic terminals such as smartphones, massive amounts of user-shot video are uploaded to the network. These videos record rich information, yet existing video search systems are typically based only on the title or tags set by the uploader, so the information contained in the video itself cannot be retrieved or exploited.
Summary of the invention
In view of this, it is necessary to provide a video processing method and system that solve the prior-art problem that the information contained in a video cannot be retrieved and exploited.
A video processing method based on geographic position is applied in a mobile photographing terminal that includes a camera. The method includes:
Invoking the camera to shoot so as to obtain video data;
Obtaining the current video frame to be sent from the video data;
Obtaining additional information corresponding to the current frame, the additional information at least including the terminal's current positioning information;
Adding the additional information to the data packet of the current frame; and
Sending the data packet, with the additional information attached, to a cloud server or a remote client, so that the cloud server or remote client can process the video frame according to the positioning information in the received packet.
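The claimed steps can be outlined in a minimal sketch. The `camera`, `gps` and `send` interfaces below are hypothetical stand-ins for the terminal's photographing module, positioning module and network module; the patent does not prescribe concrete APIs or a packet layout.

```python
def capture_and_send(camera, gps, send):
    """Sketch of the method's steps: capture a frame, fetch the current
    position, attach it as additional information, and send the packet."""
    frame = camera.next_frame()                        # invoke the camera to obtain video data
    info = {"lat": gps.lat(), "lon": gps.lon()}        # current positioning information
    packet = {"additional_info": info, "frame": frame} # add the info to the frame's packet
    send(packet)                                       # to the cloud server or remote client
    return packet
```

The receiver can then read `packet["additional_info"]` to process the frame by position.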
In one embodiment, the method further includes: detecting the rotation angle of the camera in real time while shooting; the additional information then also includes the rotation angle of the camera.
In one embodiment, the method further includes: obtaining the user identification information of the photographer corresponding to the current video frame; the additional information also includes this user identification information.
In one embodiment, the method further includes: obtaining the shooting time of the current video frame in real time while shooting; the additional information also includes the shooting time.
In one embodiment, the method further includes: obtaining an instruction input by the user; the additional information also includes this instruction.
In one embodiment, the method further includes: receiving a label and/or textual introduction input by the user; the additional information also includes the label and/or textual introduction.
In one embodiment, the method further includes: encrypting at least part of the content of the additional information, so that the additional information includes at least this encrypted content.
In one embodiment, the method further includes: calculating corresponding check information from the video data of the current frame; the encrypted content includes the check information.
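The check-information embodiment can be sketched with a standard hash. SHA-256 is an illustrative choice; the description only says a hash algorithm is used.

```python
import hashlib

def frame_checksum(frame_bytes: bytes) -> str:
    """Check information for one frame's video data (SHA-256 as an example)."""
    return hashlib.sha256(frame_bytes).hexdigest()

def is_unmodified(frame_bytes: bytes, stored_checksum: str) -> bool:
    """Verify the video data against the check information carried in the frame."""
    return frame_checksum(frame_bytes) == stored_checksum
```

Any later copy or transmission of the frame can be verified against the stored checksum.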
A video processing method based on geographic position, applied in a cloud server system, includes:
Receiving a video data packet sent by a mobile photographing terminal;
Parsing the video data and the corresponding additional information from the packet, the additional information at least including the terminal's current positioning information;
Storing the video data and obtaining a corresponding storage index;
Storing the additional information in association with the index; and
Retrieving the corresponding video data based on the additional information, and providing video data services to clients based on the retrieved data.
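The server-side steps can be sketched with an in-memory stand-in for the file store and database; the class and method names are illustrative assumptions, not part of the patent.

```python
import itertools

class VideoStore:
    """In-memory sketch of the store/index/retrieve steps above."""
    def __init__(self):
        self._files = {}              # storage index -> video bytes
        self._meta = {}               # storage index -> additional info dict
        self._ids = itertools.count(1)

    def ingest(self, video_bytes: bytes, additional_info: dict) -> int:
        index = next(self._ids)              # storage index from the file store
        self._files[index] = video_bytes     # store the video data
        self._meta[index] = additional_info  # associate the info with the index
        return index

    def search_by_location(self, lat: float, lon: float, radius: float):
        """Retrieve indexes whose positioning info lies in a square window."""
        return [i for i, info in self._meta.items()
                if abs(info["lat"] - lat) <= radius
                and abs(info["lon"] - lon) <= radius]
```

A real deployment would back `_files` with the distributed file storage system and `_meta` with the database described later.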
In one embodiment, the method further includes: parsing the instruction corresponding to each video frame from the packet, and performing the data handling procedure corresponding to the instruction.
In one embodiment, the data handling procedure includes one of the following:
Clipping a video segment of predetermined or designated length and sharing it to the self-media or social-network platform bound to the mobile photographing terminal; or
Clipping a relevant video segment and sending it to the server of a traffic-violation processing authority; or
Automatically clipping a relevant video segment and sending it to the alarm server of a police dispatch authority; or
Automatically clipping a relevant video segment and sending it to the server of an emergency response authority.
In one embodiment, the method further includes: parsing the label and/or textual introduction of the video from the packet;
Storing the label and/or textual introduction;
Extracting keywords from the label and/or textual introduction;
Performing statistical analysis of the keywords by geographic position; and
When the count of a keyword in a geographic area exceeds a predetermined threshold, obtaining the relevant video data and publishing it in a content delivery system.
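The keyword statistics step can be sketched as a per-region counter with a threshold test. The `(region, keyword)` pair representation is an assumption for illustration; the patent only requires counting keywords by geographic position.

```python
from collections import Counter, defaultdict

def hot_keywords(tagged_frames, threshold):
    """tagged_frames: iterable of (region, keyword) pairs extracted from
    labels and textual introductions. Returns, per region, the keywords
    whose count exceeds the predetermined threshold."""
    counts = defaultdict(Counter)
    for region, keyword in tagged_frames:
        counts[region][keyword] += 1
    return {region: [kw for kw, n in counter.items() if n > threshold]
            for region, counter in counts.items()}
```

Keywords returned here would trigger retrieval of the related video data for publication.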
According to the above technical scheme, every video frame of the video data carries positioning information and may carry further additional information. Based on this additional information, the video data can be retrieved by geographic position, so that rich video applications can be provided on this basis.
To make the above and other objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is an architecture diagram of the geolocation-based video information processing system provided by an embodiment of the present invention.
Fig. 2 is a structural block diagram of the mobile photographing terminal of the video information processing system of Fig. 1.
Fig. 3 is a schematic diagram of the data structure of the video data uploaded by the mobile photographing terminal of Fig. 2.
Fig. 4 is a schematic flowchart of how the video information processing system of Fig. 1 processes the video data uploaded by the mobile photographing terminal.
Fig. 5 is a schematic interface diagram of an application that the system of Fig. 1 provides based on video data carrying geographic position.
Fig. 6 is a schematic interface diagram of another application that the system of Fig. 1 provides based on video data carrying geographic position.
Fig. 7 is a schematic flowchart of the panorama preview function that the system of Fig. 1 provides based on video data carrying geographic position.
Fig. 8 is a schematic flowchart of the video search service that the system of Fig. 1 provides based on video data carrying geographic position.
Fig. 9 is a schematic diagram of an extension of the video information processing system of Fig. 1.
Fig. 10 is a schematic flowchart of a video application that the system of Fig. 1 provides based on video data carrying geographic position, labels and/or textual introductions.
Detailed description of the invention
To further explain the technical means and effects adopted by the present invention to achieve its intended objects, specific embodiments, structures, features and effects of the invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Referring to Fig. 1, which is an architecture diagram of the geolocation-based video information processing system provided by a first embodiment of the invention. As shown in Fig. 1, the video information processing system 100 may include a mobile photographing terminal 10, a cloud server system 20 and a client 30.
The mobile photographing terminal 10 may be any mobile electronic terminal with a camera, such as a mobile phone, a tablet computer or an unmanned aerial vehicle. Referring to Fig. 2, which is a structural diagram of the terminal 10, the terminal includes a memory 102, a storage controller 104, one or more processors 106 (only one is shown), a peripheral interface 108, a network module 110, an audio circuit 111, a GPS (Global Positioning System) module 112, sensors 114, a photographing module 116 and a power module 122. These components communicate with one another over one or more communication buses/signal lines.
Those skilled in the art will appreciate that the structure shown in Fig. 2 is merely illustrative and does not limit the structure of the terminal 10. For example, the terminal 10 may include more or fewer components than shown in Fig. 2, or have a different configuration.
The memory 102 may be used to store software programs and modules, such as the program instructions/modules corresponding to the methods and devices of the embodiments of the invention. By running the software programs and modules stored in the memory 102, the processor 106 performs various functional applications and data processing.
The memory 102 may include high-speed random access memory, and may also include non-volatile memory such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. In some examples, the memory 102 may further include memory located remotely from the processor 106; such remote memory may be connected to the terminal 10 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof. Access to the memory 102 by the processor 106 and other components is performed under the control of the storage controller 104.
The peripheral interface 108 couples various input/output devices to the processor 106. The processor 106 runs the software and instructions in the memory 102 to perform various functions and data processing. In some embodiments, the peripheral interface 108, the processor 106 and the storage controller 104 may be implemented in a single chip; in other examples they may each be implemented by an independent chip.
The network module 110 is used to receive and send network signals, which may include wireless signals. In one embodiment, the network module 110 is essentially a radio-frequency module that receives and sends electromagnetic waves, converting between electromagnetic waves and electrical signals so as to communicate with a communication network or other devices. The radio-frequency module may include various existing components for performing these functions, such as an antenna, an RF transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory and so on. The radio-frequency module may communicate with networks such as the Internet, intranets and wireless networks, or communicate with other devices through a wireless network. The wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network, and may use various communication standards, protocols and technologies, including, but not limited to: Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (such as IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-MAX), other protocols for mail, instant messaging and short messages, any other suitable communication protocol, and even protocols not yet developed.
The audio circuit 111 provides the recording interface of the terminal 10. Specifically, the audio circuit 111 receives electrical signals from a microphone, converts them into audio data, and transmits the audio data to the processor 106 for further processing.
The GPS module 112 receives positioning signals broadcast by GPS satellites and calculates its own position from them. The position may be expressed, for example, by longitude, latitude and altitude. It should be appreciated that positioning is not limited to the GPS system; other available satellite positioning systems include the BeiDou (Compass) Navigation Satellite System (CNSS) and GLONASS (Global Navigation Satellite System). Moreover, positioning is not limited to satellite technology; wireless positioning technologies, for instance those based on wireless base stations or WiFi, may also be used. In that case, the GPS module 112 may be replaced by a corresponding module, or the positioning may be realized directly by a specific program executed by the processor 106.
Examples of the sensors 114 include, but are not limited to, light sensors, attitude sensors and others. An ambient light sensor can sense the brightness of ambient light, allowing shooting to be adjusted accordingly. Attitude sensors may include accelerometers, gravity sensors, gyroscopes and the like, which can detect the spatial attitude of the terminal 10, such as its rotation angles in all directions. It can be appreciated that the rotation angles of the terminal 10 correspond to the shooting direction. Other sensors may include barometers, hygrometers, thermometers and so on.
The photographing module 116 is used to shoot photos or video. The photos or video can be stored in the memory 102 and sent through the network module 110. The photographing module 116 may include components such as a lens module, an image sensor and a flash. The lens module images the subject being shot and maps the image onto the image sensor; the image sensor receives the light from the lens module and records the image information. Specifically, the image sensor may be based on complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD) or other image sensing principles. The flash provides exposure compensation during shooting; in general, the flash of the terminal 10 may be a light-emitting diode (LED) flash.
The power module 122 provides electric power to the processor 106 and the other components. Specifically, the power module 122 may include a power management system, one or more power sources (such as batteries or mains power), a charging circuit, a power failure detection circuit, an inverter, a power status indicator, and any other components related to the generation, management and distribution of electric power within the terminal 10.
The software and program modules stored in the memory 102 may include an operating system 130 and application programs running on it. The operating system 130 may include various software components and/or drivers for managing system tasks (such as memory management, storage device control, power management and so on) and can interact with various hardware and software components, thereby providing the running environment for other software components. The application programs may include a shooting module 131, an additional information acquisition module 132, a video data packaging module 133 and a data transmission module 134.
The shooting module 131 calls the photographing module 116 to shoot and obtain video data; the additional information acquisition module 132 obtains the additional information corresponding to the current video frame and adds it to the frame; the video data packaging module 133 packages the data of one or more video frames carrying additional information; and the data transmission module 134 sends the packaged video data to the cloud server system 20, so that the cloud server system 20 can provide various information services based on the additional information in the received video data.
As shown in Fig. 3, one video data packet may contain multiple video frames, and each video frame contains both its additional information and the video data itself; the video data may be stored in any format (such as H.264 or MPEG-4).
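A minimal serialization of this packet structure can be sketched as follows. The wire layout (length-prefixed JSON metadata followed by length-prefixed frame data, repeated per frame) is purely an illustrative assumption; Fig. 3 does not fix a byte format.

```python
import json
import struct

def pack_video_packet(frames):
    """frames: list of (additional_info: dict, frame_bytes: bytes).
    Each frame is emitted as: 4-byte meta length, meta JSON,
    4-byte data length, raw frame data."""
    out = bytearray()
    for info, data in frames:
        meta = json.dumps(info).encode("utf-8")
        out += struct.pack(">I", len(meta)) + meta
        out += struct.pack(">I", len(data)) + data
    return bytes(out)

def unpack_video_packet(blob):
    """Inverse of pack_video_packet: recover (info, data) pairs."""
    frames, pos = [], 0
    while pos < len(blob):
        (n,) = struct.unpack_from(">I", blob, pos); pos += 4
        info = json.loads(blob[pos:pos + n].decode("utf-8")); pos += n
        (m,) = struct.unpack_from(">I", blob, pos); pos += 4
        frames.append((info, blob[pos:pos + m])); pos += m
    return frames
```

In practice the `data` field would hold the encoded (e.g. H.264) frame payload.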
The additional information falls into two classes. One class is editable additional information, which the user can modify, add to or delete through a specific application; editable additional information is generally used to store information input by the user. The other class is non-editable additional information, which, once written into a video frame, can no longer be edited by the user; non-editable additional information is generally used to store status information acquired in real time.
In a specific embodiment, the editable additional information may include information input by the user, such as labels and textual introductions.
In a specific embodiment, the editable additional information may include the code of an instruction input by the user, such as an instruction to share or to report.
In a specific embodiment, the non-editable additional information may include positioning information, for instance the longitude, latitude and altitude obtained by the GPS module 112.
In a specific embodiment, the non-editable additional information may include the attitude information of the terminal 10, for instance the rotation angles of the terminal 10 or of the photographing module 116 in all directions, which can be obtained through the sensors 114.
In a specific embodiment, the non-editable additional information may include the shooting time of the current video frame.
In a specific embodiment, the non-editable additional information may include the user identification information of the photographer. The user identification information may be, for example, the user's account in a network account system, or other information that uniquely determines a user account in such a system. At any given moment, the user of the mobile photographing terminal 10, i.e. the photographer, is defined as exactly one person: the user account bound to the terminal 10, or a user account authorized to use it.
In a specific embodiment, the non-editable additional information may include check information for the video data of the current frame. The check information is, for example, calculated from the video data using a hash algorithm and can be used to verify whether the video data has been modified. No matter how the frame is copied or transmitted, the check information allows verification that the video data is unmodified, so the authenticity of the video data can be further confirmed, which provides a technical guarantee for using the video as judicial evidence.
Editable additional information may be written into only some of the video frames. For example, among the multiple frames produced within one second (or some other time span), the editable information may be written into only one fixed frame (such as the first frame); the frame carrying the editable information can be regarded as the key frame for that period. In this way, editable additional information can be written directly into the video frames while the storage space it occupies is kept to a minimum.
Non-editable additional information is typically acquired in real time, and can therefore be written into every frame. However, this is not the only option: non-editable information may likewise be written into only some of the frames, for example one frame per second.
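The write policy above can be sketched as a small helper. The modulo-based keyframe test and the parameter names are illustrative assumptions.

```python
def attach_metadata(frame_index, fps, editable_info, noneditable_info):
    """Per the scheme above: non-editable info goes into every frame,
    while editable info goes only into the first frame of each second
    (treated here as the key frame)."""
    meta = dict(noneditable_info)
    if frame_index % fps == 0:   # first frame of this second
        meta.update(editable_info)
    return meta
```

At 25 fps, frames 0, 25, 50, ... carry both classes of information; all other frames carry only the non-editable class.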
Moreover, to prevent the non-editable additional information from being destroyed or tampered with, it can be encrypted with an asymmetric encryption algorithm (such as RSA) before being written into the video frames. For example, the same public key can be stored in every mobile photographing terminal 10 and used to encrypt the non-editable information, while the corresponding private key exists only in the cloud server system 20. That is to say, only the cloud server system 20 can decrypt and read the encrypted additional information in the video frames.
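This public-key scheme can be sketched with the third-party `cryptography` package (an assumption; the patent names no algorithm or library beyond asymmetric encryption). RSA with OAEP padding is used as one possible choice; note that RSA can only encrypt short payloads, which suits compact per-frame metadata.

```python
# Assumes the third-party "cryptography" package is installed.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In the scheme above, every capture terminal holds the same public key;
# only the cloud server system holds the matching private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def terminal_encrypt(metadata: bytes) -> bytes:
    # Terminal side: encrypt non-editable additional info before
    # writing it into the frame (payload must stay under the OAEP limit).
    return public_key.encrypt(metadata, oaep)

def server_decrypt(ciphertext: bytes) -> bytes:
    # Server side: only the private-key holder can read the metadata.
    return private_key.decrypt(ciphertext, oaep)
```

In deployment the key pair would be generated once, with the public half provisioned to terminals, rather than generated per run as in this sketch.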
As described above, in the video information processing system of this embodiment, the data uploaded by the mobile photographing terminal 10 includes both video data and the above additional information.
As shown in Fig. 1, the cloud server system 20 may include a video processing server 21, a database 22, a distributed file storage system 23 and an application server 24.
The video processing server 21 receives the video data packets uploaded by the mobile photographing terminal 10 and processes them further.
Referring to Fig. 4, in a specific embodiment the video processing server 21 processes a received video data packet in the following steps:
Step S101: extract the additional information of every video frame in the packet. First, the packet is unpacked to obtain all the video frames; then the additional information is parsed out of each frame according to a predefined protocol.
Step S102: convert the video data into a form suitable for storage, for example by compressing it or converting its format. Note that the processing in this step applies only to the video data itself and does not affect the additional information: after processing, every frame still contains the same additional information as before. Step S102 may also be omitted; that is, after extracting the additional information of every frame, the received packet itself can be used directly as the storage format.
Step S103: store the video data in the distributed file storage system and obtain a corresponding storage index. That is, the video data obtained in step S102, or the original packet, is stored in the distributed file storage system, which returns a storage index used for subsequent access to this video data.
Step S104: store the additional information in the database in association with the storage index. For example, a relational database may store the additional information and the storage index, with the different items of the additional information (such as coordinates, shooting time, user ID, instruction code, attitude information, labels and so on) stored in separate fields. It can be appreciated that if the additional information was encrypted, it must first be decrypted.
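Step S104 can be sketched with a relational table in SQLite. The schema (table and column names) is an illustrative assumption that mirrors the fields listed above; a production system would use the database 22 of Fig. 1.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE frame_meta (
        storage_index TEXT,  -- index returned by the file storage system
        lat REAL,            -- positioning information
        lon REAL,
        shot_at TEXT,        -- shooting time
        user_id TEXT,        -- photographer's account
        tag TEXT             -- user-supplied label
    )""")

def store_meta(storage_index, info):
    """Associate one frame's (decrypted) additional info with its index."""
    conn.execute(
        "INSERT INTO frame_meta VALUES (?, ?, ?, ?, ?, ?)",
        (storage_index, info.get("lat"), info.get("lon"),
         info.get("shot_at"), info.get("user_id"), info.get("tag")))

def find_near(lat, lon, r):
    """Retrieve storage indexes of frames shot within a coordinate window."""
    cur = conn.execute(
        "SELECT storage_index FROM frame_meta "
        "WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?",
        (lat - r, lat + r, lon - r, lon + r))
    return [row[0] for row in cur]
```

Storing each item in its own field is what makes the later retrieval, statistics and analysis by geographic position possible.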
Through the above process, the video data can be retrieved, counted, analyzed and output based on the additional information, thereby providing users with a variety of video applications; the concrete processing can be implemented by the application server 24.
The client 30 may include, for example, a smartphone 31, a notebook computer 32, a desktop computer 33, a tablet computer 34, and any other intelligent terminal not shown in Fig. 1, such as smart glasses, an augmented reality helmet or a wearable smart device.
The client 30 interacts with the application server 24 and can thus use the various video applications that the application server 24 provides, as described below with reference to concrete application scenarios.
Referring to Fig. 5, in a concrete application scenario, a mobile photographing terminal 10 moves from position A to position B while continuously shooting and uploading data packets of video frames, with additional information, to the cloud server system 20. The cloud server system 20 can forward the received packets to a client; alternatively, the terminal 10 can send the packets directly to the client in a point-to-point manner. The client parses the positioning information from the received packets and can draw a trajectory on an electronic map 301 according to that information, while simultaneously displaying the video picture 302. The user of the client can thus follow the position of the terminal 10 in real time on the map 301 and at the same time watch the picture currently being shot in the video area 302; watching the moving position and the real-time picture together is particularly suitable for real-time tracking of a target. It should be understood, however, that such tracking is meaningful only for the same terminal 10 with the same photographer identity.
Further, when the additional information includes the attitude information of the mobile shooting terminal 10, the viewing angle of the video can also be shown in the electronic map 301.
It will be appreciated that each point of the trajectory in the electronic map 301 corresponds to a coordinate. When the user clicks a point on the trajectory, the coordinate of the clicked point can be obtained through a preset mapping; the video frame whose location information is closest to that coordinate can then be found among all received video frames, and the video picture 302 can be switched to that frame.
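The "closest frame to the clicked coordinate" lookup described above can be sketched as a nearest-neighbour search over the per-frame location information. This is a minimal illustration; the field names and the use of the haversine distance are assumptions, not details from the patent.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_frame(frames, click_lat, click_lon):
    """Among received frames (each carrying its location information),
    pick the one whose shooting position is closest to the clicked point."""
    return min(frames, key=lambda f: haversine_m(f["lat"], f["lon"],
                                                 click_lat, click_lon))

frames = [
    {"id": 1, "lat": 22.5400, "lon": 114.0600},
    {"id": 2, "lat": 22.5410, "lon": 114.0610},
    {"id": 3, "lat": 22.5500, "lon": 114.0700},
]
best = nearest_frame(frames, 22.5409, 114.0611)
```

The video picture 302 would then be switched to `best`, the frame shot nearest to the clicked trajectory point.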
In the scenario above, the client watches the video shot by one specific mobile shooting terminal 10, but the form of video services attached to the electronic map is not limited to this. For example, in another concrete application scenario, the cloud server system 20 can provide an online map service, supplying electronic map data to clients.
Referring to Fig. 6, a corresponding electronic map application runs on the client 30; it obtains electronic map data from the cloud server system 20 and displays it in an interface 61. As shown in Fig. 6, when activated, the interface 61 can display a menu 62 offering various additional functions, such as an entry for "panorama preview". When the user triggers the panorama preview function, the electronic map application obtains the coordinate of the location the user clicked, generates a preview request from this coordinate, and sends the preview request to the cloud server system 20.
Referring to Fig. 7, the flow for processing a panorama preview request in the cloud server system 20 comprises the following steps:
Step S201: receive the panorama preview request.
Step S202: parse the preview coordinate from the panorama preview request.
The preview coordinate is the coordinate of the location the user clicked, obtained by the electronic map application when the user triggered the panorama preview function, or the coordinate of a point marked in some other way.
Step S203: retrieve the video data whose additional information matches the preview coordinate.
As described above, the database stores the additional information (including at least the location information) of all the video data. The additional information matching the preview coordinate can therefore be retrieved, and the corresponding video data obtained through it.
Because video data shot by multiple mobile shooting terminals may exist at the same position, the retrieval results can be sorted by some criterion, for example shooting time or image definition, and the video data of at least one mobile shooting terminal can then be taken from the sorted results as the retrieval result.
Note that when the user previews a panorama in the electronic map, the transmitted video data may contain only a single frame rather than all of the video data, which reduces the transmission volume.
In addition, the retrieval in step S203 can be restricted to video data that the photographed users have authorized for disclosure.
Step S204: send the retrieved data to the client for display.
After receiving the video data sent by the cloud server system 20, the client unpacks it according to the predefined protocol, decompresses it (if compressed), decrypts it (if encrypted), and so on, and can then output it in the interface.
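To make the "unpack according to the predefined protocol" step concrete, here is one possible packet layout for a video frame carrying additional information. The layout (a 4-byte length prefix, a compression flag, JSON-encoded additional information, then the frame payload) is entirely an assumption for illustration; the patent does not specify a wire format.

```python
import json
import struct
import zlib

def pack_frame(frame_bytes, info, compress=True):
    """Illustrative packet layout (an assumption, not the patent's format):
    big-endian 4-byte length of the JSON additional information, a 1-byte
    compression flag, the JSON itself, then the (optionally zlib-compressed)
    video-frame payload."""
    meta = json.dumps(info).encode("utf-8")
    payload = zlib.compress(frame_bytes) if compress else frame_bytes
    return struct.pack(">I?", len(meta), compress) + meta + payload

def unpack_frame(packet):
    """Client-side counterpart: recover the additional information and frame bytes."""
    meta_len, compressed = struct.unpack_from(">I?", packet)
    meta = json.loads(packet[5:5 + meta_len].decode("utf-8"))
    payload = packet[5 + meta_len:]
    frame = zlib.decompress(payload) if compressed else payload
    return meta, frame

pkt = pack_frame(b"\x00\x01" * 100, {"lat": 22.54, "lon": 114.06})
meta, frame = unpack_frame(pkt)
```

Encryption of part of the additional information, as described earlier, would slot in between the JSON encoding and the length prefix.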
Because the user's panorama preview involves not only a position but also a direction, the preview request can also include a direction selected by the user. In this case, the retrieval in step S203 must match not only the preview coordinate but also the shooting direction of the video: only video data whose coordinate and direction both match qualifies as matching video data.
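The direction-matching condition can be sketched as an angular comparison with wrap-around at 360 degrees. The tolerance of plus or minus 30 degrees is an assumed value; the patent only requires that coordinate and direction both match.

```python
def direction_matches(frame_dir_deg, requested_dir_deg, tol_deg=30.0):
    """Direction-aware variant of step S203: a frame matches only if its
    shooting direction lies within an assumed +/- tol_deg of the direction
    the user selected, handling the wrap-around at 360 degrees."""
    diff = abs(frame_dir_deg - requested_dir_deg) % 360.0
    return min(diff, 360.0 - diff) <= tol_deg
```

The wrap-around handling matters: a frame shot at 350 degrees should match a request for 10 degrees, even though the naive difference is 340.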
According to the above embodiment, a panorama preview function in the electronic map can be realized directly from the massive video data shot by mobile shooting terminals, without employing dedicated street-view cars to shoot street photographs, effectively reducing the construction cost of a real-scene browsing function.
Referring to Fig. 8, in another concrete application scenario, the cloud server system 20 also provides a video search service based on geographic position. As shown in Fig. 8, the flow of this service comprises the following steps:
Step S301: receive a video search request sent by a client;
Step S302: parse the search coordinate and search time from the video search request;
Step S303: search for video data matching the search coordinate and search time;
Step S304: send the found video data to the client for display; and
Step S305: revise the ordering of the video data shot by different camera terminals according to feedback from users during viewing.
In this way, when a hot event occurs at some place, no dedicated shooting is needed: given the place and time of the event, the corresponding video data can be obtained. Revising the ordering of the video data according to feedback from users during viewing further ensures that users watch the most interesting video content first.
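The search and feedback-reranking steps above can be sketched as two small functions. The matching radius, the use of a click count as the feedback signal, and all field names are illustrative assumptions.

```python
def search_videos(records, lat, lon, t_start, t_end, radius_deg=0.001):
    """Steps S302-S303 sketch: match stored additional information against
    the search coordinate and a shooting-time window (ISO-8601 strings,
    which compare correctly as text)."""
    return [r for r in records
            if abs(r["lat"] - lat) <= radius_deg
            and abs(r["lon"] - lon) <= radius_deg
            and t_start <= r["shot_time"] <= t_end]

def rerank_by_feedback(results, clicks):
    """Step S305 sketch: promote the videos viewers engaged with most."""
    return sorted(results, key=lambda r: clicks.get(r["id"], 0), reverse=True)

records = [
    {"id": 1, "lat": 22.54, "lon": 114.06, "shot_time": "2016-03-15T10:05:00"},
    {"id": 2, "lat": 22.54, "lon": 114.06, "shot_time": "2016-03-15T12:00:00"},
    {"id": 3, "lat": 23.00, "lon": 114.06, "shot_time": "2016-03-15T10:05:00"},
]
found = search_videos(records, 22.54, 114.06,
                      "2016-03-15T10:00:00", "2016-03-15T11:00:00")
ordered = rerank_by_feedback([{"id": 1}, {"id": 2}], {2: 5, 1: 1})
```

Only record 1 survives the search: record 2 falls outside the time window and record 3 outside the coordinate radius.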
The embodiments above mainly use the geographic location information in the video data to retrieve and match video data, and build concrete video applications on that basis. The additional information in the video data, however, is not limited to these application scenarios.
For example, while the mobile shooting terminal 10 is shooting and uploading video data, the user can also input instructions, for instance: share, report a traffic violation, raise an alarm, file an insurance claim, or call first aid. An instruction can be input directly through a button or touch screen on the mobile shooting terminal 10, or through a mobile electronic terminal 30 connected to it. The codes of these instructions can be added to the data packets of the video frames. Correspondingly, after receiving a video frame packet, the cloud server system 20 can parse out these instruction codes and execute the corresponding data processing procedure, or forward the instructions to the server of a third-party institution for further processing.
As shown in Fig. 9, the third-party institutions here may include self-media platforms, social network platforms, traffic-violation processing authorities, the police, insurance institutions, emergency authorities, and so on.
For example, on receiving a user's share instruction, the cloud server system 20 can automatically clip a video segment of predetermined length (for instance, counted back from the moment the share instruction is received) or of a designated length, and share it to the self-media platform or social network platform (such as WeChat Moments, Qzone, or Weibo) bound to the mobile shooting terminal 10. Thus, when the user encounters an interesting event, beautiful scenery, or any other content worth sharing, the shot video can be shared with a single key press.
Furthermore, the user can add labels, text introductions, and the like to the shared content. When the mobile shooting terminal 10 includes an input interface, the user can input these directly on the terminal; when it does not, they can be input through the mobile electronic terminal bound to it. The cloud server system 20 can store these labels and text introductions and use them for video retrieval.
On receiving a traffic-violation report instruction, the cloud server system 20 can automatically clip a relevant video segment and send it to the server of the traffic-violation processing authority, so that the user can report a traffic violation with a single key press on the mobile shooting terminal 10.
On receiving an alarm instruction, the cloud server system 20 can automatically clip a relevant video segment and send it to the alarm server of the police. The user can thus raise an alarm with a single key press, and because the video frame packets directly contain location information, the police can promptly locate the alarm position.
On receiving an insurance-claim instruction, the cloud server system 20 can automatically clip a relevant video segment and send it to the server of the insurance institution. The user can thus file a claim with a single key press, and because the video can effectively reconstruct the scene, the insurance institution can provide a remote claim-settlement service without visiting the scene.
On receiving a first-aid instruction, the cloud server system 20 can automatically clip a relevant video segment and send it to the server of the emergency authority, so that the user can call emergency care with a single key press. Because the video frame packets directly contain location information, the emergency authority can rapidly locate the calling position, reducing the time spent communicating it.
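The instruction handling described in the preceding paragraphs amounts to a dispatch table from instruction code to destination, plus clipping a segment around the instructing frame. The numeric codes, the clip length, and the destination strings below are all illustrative assumptions; the patent names the actions but specifies neither codes nor lengths.

```python
# Hypothetical numeric instruction codes (the patent does not define them).
SHARE, TRAFFIC_REPORT, ALARM, INSURANCE, FIRST_AID = range(5)

DISPATCH = {
    SHARE:          "media/social platform bound to the terminal",
    TRAFFIC_REPORT: "traffic-violation authority server",
    ALARM:          "police alarm server",
    INSURANCE:      "insurance-institution server",
    FIRST_AID:      "emergency-authority server",
}

def handle_instruction(code, frames, clip_len=5):
    """On receiving an instruction code in a frame packet, clip a segment of
    predetermined length ending at the instructing frame and return the
    clip together with the name of its destination."""
    clip = frames[-clip_len:]  # the clip_len most recent frames
    return DISPATCH[code], clip

dest, clip = handle_instruction(ALARM, list(range(10)))
```

A real implementation would forward the clip (with its location-bearing additional information) to the named third-party server; here the destination is just a label.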
According to the above embodiments, instructions can be embedded directly in the additional information of the video data, and the cloud server system 20 can execute the corresponding data processing procedures and functions, so that the video data can be used far more widely.
As described above, when sharing video shot by the mobile shooting terminal 10, the user can input labels or text introductions. Based on these, the cloud server system 20 can also automatically discover and mine hot events and their associated videos.
Referring to Fig. 10, the process of automatically mining hot events and associated videos from video labels and text introductions comprises the following steps:
Step S301: extract keywords from the received video labels and text introductions.
A video label can generally be used directly as a keyword. A text introduction can be put through steps such as word segmentation and word-frequency statistics to extract keywords from it.
Step S302: count the occurrence frequency and/or other parameters of each keyword by geographic position.
Because the additional information sent along with the video data also includes location information, the frequency and other parameters with which a keyword occurs can be counted per geographic position. The other parameters here may include, for example, density and growth rate.
Step S303: when the frequency and/or other parameters of a keyword at a place exceed a preset threshold, publish the video content related to that keyword at that place in a content delivery system.
When the frequency and/or other parameters of a keyword at a given place exceed the preset values, it can be assumed that a hot event has occurred, or hot content exists, at that place. The video content of that place related to the keyword can then be published in a content delivery system for users to browse. The content delivery system here may be, for example, a video website, an app, or another content delivery system.
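The per-place keyword statistics of steps S302-S303 can be sketched as follows. Bucketing coordinates into a coarse grid cell by rounding, and the threshold value, are assumptions made for the example; the patent speaks only of counting "by geographic position" against "a preset threshold".

```python
from collections import Counter

def hot_keywords(tagged_frames, threshold=3):
    """Count keyword occurrences per place (here: a coarse grid cell derived
    by rounding the coordinates to two decimals, roughly 1 km) and flag the
    (place, keyword) pairs whose frequency exceeds the preset threshold --
    candidates for a hot event at that place."""
    counts = Counter()
    for f in tagged_frames:
        cell = (round(f["lat"], 2), round(f["lon"], 2))  # assumed bucketing
        for kw in f["keywords"]:
            counts[(cell, kw)] += 1
    return [place_kw for place_kw, n in counts.items() if n > threshold]

frames = (
    [{"lat": 22.541, "lon": 114.062, "keywords": ["fire"]}] * 4
    + [{"lat": 22.541, "lon": 114.062, "keywords": ["sunset"]}]
)
hot = hot_keywords(frames)
```

Growth rate or density, the "other parameters" the description mentions, would be computed over the same per-cell buckets.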
The published video content may be video shot by a single mobile shooting terminal, or may be cut together from video data shot by multiple mobile shooting terminals 10.
In this way, all kinds of hot events, scenes, and content in daily life can be discovered automatically and presented to users directly as video, without dispatching a dedicated shooting team.
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention is disclosed above with preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make slight changes or modifications amounting to equivalent embodiments. Any simple amendment, equivalent variation, or modification made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.
Claims (12)
1. A geographic-position-based video processing method, applied in a mobile shooting terminal comprising a camera, characterized in that the method comprises:
calling the camera to shoot and obtain video data;
obtaining a current video frame to be sent from the video data;
obtaining additional information corresponding to the current video frame, the additional information including at least the current location information of the mobile shooting terminal;
adding the additional information to the data packet of the current video frame; and
sending the data packet of the current video frame carrying the additional information to a cloud server or a remote client, so that the cloud server or remote client processes the video frame according to the location information in the received data packet.
2. The geographic-position-based video processing method according to claim 1, characterized by further comprising: detecting the rotation angle of the camera in real time during shooting; the additional information further including the rotation angle of the camera.
3. The geographic-position-based video processing method according to claim 1, characterized by further comprising: obtaining the user identification information of the photographer corresponding to the current video frame; the additional information further including the user identification information.
4. The geographic-position-based video processing method according to claim 1, characterized by further comprising: obtaining the shooting time of the current video frame in real time during shooting; the additional information further including the shooting time.
5. The geographic-position-based video processing method according to claim 1, characterized by further comprising: obtaining an instruction input by the user; the additional information further including the instruction.
6. The geographic-position-based video processing method according to claim 1, characterized by further comprising: receiving a label and/or text introduction input by the user; the additional information further including the label and/or text introduction.
7. The geographic-position-based video processing method according to claim 1, characterized by further comprising: encrypting at least part of the content of the additional information, the additional information including at least the encrypted content.
8. The geographic-position-based video processing method according to claim 1, characterized by further comprising: calculating corresponding check information from the video data of the current frame; the encrypted content including the check information.
9. A geographic-position-based video processing method, characterized by comprising:
receiving a video data packet sent by a mobile shooting terminal;
parsing video data and the corresponding additional information from the video data packet, the additional information including at least the current location information of the mobile shooting terminal;
storing the video data and obtaining a corresponding index;
storing the additional information in association with the index; and
retrieving by the additional information to obtain the corresponding video data, and providing video data services to clients based on the retrieved video data.
10. The geographic-position-based video processing method according to claim 9, characterized by further comprising: parsing the instruction corresponding to each video frame from the video data packet; and executing the data processing procedure corresponding to the instruction.
11. The geographic-position-based video processing method according to claim 10, characterized in that the data processing procedure comprises:
clipping a video segment of predetermined or designated length and sharing it to the self-media platform or social network platform bound to the mobile shooting terminal; or
clipping a relevant video segment and sending it to the server of a traffic-violation processing authority; or
automatically clipping a relevant video segment and sending it to the alarm server of the police; or
automatically clipping a relevant video segment and sending it to the server of an emergency authority.
12. The geographic-position-based video processing method according to claim 9, characterized by further comprising: parsing the label and/or text introduction of the video from the video data packet;
storing the label and/or text introduction;
extracting keywords from the label and/or text introduction;
performing statistical analysis of the keywords by geographic position; and
when the statistical value of a keyword at a geographic position exceeds a predetermined threshold, obtaining the relevant video data and publishing it in a content delivery system.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610147581.0A CN105812660A (en) | 2016-03-15 | 2016-03-15 | Video processing method based on geographic position |
PCT/CN2016/077182 WO2017156793A1 (en) | 2016-03-15 | 2016-03-24 | Geographic location-based video processing method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610147581.0A CN105812660A (en) | 2016-03-15 | 2016-03-15 | Video processing method based on geographic position |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105812660A true CN105812660A (en) | 2016-07-27 |
Family
ID=56468429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610147581.0A Pending CN105812660A (en) | 2016-03-15 | 2016-03-15 | Video processing method based on geographic position |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105812660A (en) |
WO (1) | WO2017156793A1 (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108109188A (en) * | 2018-01-12 | 2018-06-01 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108460037A (en) * | 2017-02-20 | 2018-08-28 | 北京金奔腾汽车科技有限公司 | A method of stroke video is preserved and retrieved based on geographical location |
CN108833767A (en) * | 2018-03-28 | 2018-11-16 | 深圳市语图科技有限公司 | A kind of positioning system and method applied to record motion profile |
CN110019628A (en) * | 2017-12-27 | 2019-07-16 | 努比亚技术有限公司 | Localization method, mobile terminal and computer readable storage medium |
CN111327860A (en) * | 2020-01-21 | 2020-06-23 | 成都纵横自动化技术股份有限公司 | Synchronous transmission method for figures and electronic equipment |
CN111353168A (en) * | 2020-02-27 | 2020-06-30 | 闻泰通讯股份有限公司 | Multimedia file management method, device, equipment and storage medium |
CN111444385A (en) * | 2020-03-27 | 2020-07-24 | 西安应用光学研究所 | Electronic map real-time video mosaic method based on image corner matching |
CN111770107A (en) * | 2020-07-07 | 2020-10-13 | 广州通达汽车电气股份有限公司 | Streaming media transmission method, system, storage medium and computer equipment for bearing dynamic data |
CN112004046A (en) * | 2019-05-27 | 2020-11-27 | 中兴通讯股份有限公司 | Image processing method and device based on video conference |
CN114326764A (en) * | 2021-11-29 | 2022-04-12 | 上海岩易科技有限公司 | Rtmp transmission-based smart forestry unmanned aerial vehicle fixed-point live broadcast method and unmanned aerial vehicle system |
CN114422856A (en) * | 2022-01-07 | 2022-04-29 | 北京达佳互联信息技术有限公司 | Video data verification method, device, equipment and storage medium |
CN115455275A (en) * | 2022-11-08 | 2022-12-09 | 广东卓维网络有限公司 | Video processing system fusing inspection equipment |
WO2023273432A1 (en) * | 2021-06-28 | 2023-01-05 | 惠州Tcl云创科技有限公司 | Intelligent identification-based media file labeling method and apparatus, device, and medium |
CN115695924A (en) * | 2021-07-30 | 2023-02-03 | 瑞庭网络技术(上海)有限公司 | Data processing method, client, server, and computer-readable recording medium |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163050B (en) * | 2018-07-23 | 2022-09-27 | 腾讯科技(深圳)有限公司 | Video processing method and device, terminal equipment, server and storage medium |
CN113222637A (en) * | 2021-02-26 | 2021-08-06 | 深圳前海微众银行股份有限公司 | Architecture method, device, equipment, medium and program product of store visitor information |
CN113704554B (en) * | 2021-07-13 | 2024-03-29 | 湖南中惠旅智能科技有限责任公司 | Video retrieval method and system based on electronic map |
CN114040006B (en) * | 2021-11-01 | 2024-02-27 | 北京流通宝数据科技服务有限公司 | Multi-mobile terminal data sharing method and system based on digital asset management |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101867730A (en) * | 2010-06-09 | 2010-10-20 | 马明 | Multimedia integration method based on user trajectory |
CN102289520A (en) * | 2011-09-15 | 2011-12-21 | 山西四和交通工程有限责任公司 | Traffic video retrieval system and realization method thereof |
CN103686239A (en) * | 2013-12-11 | 2014-03-26 | 深圳先进技术研究院 | Network sharing crime evidence obtaining system and method based on location videos |
CN104679873A (en) * | 2015-03-09 | 2015-06-03 | 深圳市道通智能航空技术有限公司 | Aircraft tracing method and aircraft tracing system |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6741790B1 (en) * | 1997-05-29 | 2004-05-25 | Red Hen Systems, Inc. | GPS video mapping system |
JP3725134B2 (en) * | 2003-04-14 | 2005-12-07 | 株式会社エヌ・ティ・ティ・ドコモ | Mobile communication system, mobile communication terminal, and program. |
KR101518829B1 (en) * | 2008-06-17 | 2015-05-11 | 삼성전자주식회사 | Method and Apparatus for recording and playing moving images including location information |
CN103716584A (en) * | 2013-11-30 | 2014-04-09 | 南京大学 | Context sensing-based intelligent mobile terminal field monitoring method |
CN103984710B (en) * | 2014-05-05 | 2017-07-18 | 深圳先进技术研究院 | Video interactive querying method and system based on mass data |
CN105022801B (en) * | 2015-06-30 | 2018-06-22 | 北京奇艺世纪科技有限公司 | A kind of hot topic video mining method and device |
- 2016-03-15: CN application CN201610147581.0A filed (CN105812660A), status Pending
- 2016-03-24: PCT application PCT/CN2016/077182 filed (WO2017156793A1)
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460037A (en) * | 2017-02-20 | 2018-08-28 | 北京金奔腾汽车科技有限公司 | A method of stroke video is preserved and retrieved based on geographical location |
CN110019628A (en) * | 2017-12-27 | 2019-07-16 | 努比亚技术有限公司 | Localization method, mobile terminal and computer readable storage medium |
CN110019628B (en) * | 2017-12-27 | 2023-12-29 | 努比亚技术有限公司 | Positioning method, mobile terminal and computer readable storage medium |
CN108109188A (en) * | 2018-01-12 | 2018-06-01 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN108109188B (en) * | 2018-01-12 | 2022-02-08 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN108833767A (en) * | 2018-03-28 | 2018-11-16 | 深圳市语图科技有限公司 | A kind of positioning system and method applied to record motion profile |
CN112004046A (en) * | 2019-05-27 | 2020-11-27 | 中兴通讯股份有限公司 | Image processing method and device based on video conference |
CN111327860A (en) * | 2020-01-21 | 2020-06-23 | 成都纵横自动化技术股份有限公司 | Synchronous transmission method for figures and electronic equipment |
CN111353168A (en) * | 2020-02-27 | 2020-06-30 | 闻泰通讯股份有限公司 | Multimedia file management method, device, equipment and storage medium |
CN111444385A (en) * | 2020-03-27 | 2020-07-24 | 西安应用光学研究所 | Electronic map real-time video mosaic method based on image corner matching |
CN111444385B (en) * | 2020-03-27 | 2023-03-03 | 西安应用光学研究所 | Electronic map real-time video mosaic method based on image corner matching |
CN111770107A (en) * | 2020-07-07 | 2020-10-13 | 广州通达汽车电气股份有限公司 | Streaming media transmission method, system, storage medium and computer equipment for bearing dynamic data |
WO2023273432A1 (en) * | 2021-06-28 | 2023-01-05 | 惠州Tcl云创科技有限公司 | Intelligent identification-based media file labeling method and apparatus, device, and medium |
CN115695924A (en) * | 2021-07-30 | 2023-02-03 | 瑞庭网络技术(上海)有限公司 | Data processing method, client, server, and computer-readable recording medium |
CN114326764A (en) * | 2021-11-29 | 2022-04-12 | 上海岩易科技有限公司 | Rtmp transmission-based smart forestry unmanned aerial vehicle fixed-point live broadcast method and unmanned aerial vehicle system |
CN114422856A (en) * | 2022-01-07 | 2022-04-29 | 北京达佳互联信息技术有限公司 | Video data verification method, device, equipment and storage medium |
CN114422856B (en) * | 2022-01-07 | 2024-06-04 | 北京达佳互联信息技术有限公司 | Video data verification method, device, equipment and storage medium |
CN115455275A (en) * | 2022-11-08 | 2022-12-09 | 广东卓维网络有限公司 | Video processing system fusing inspection equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2017156793A1 (en) | 2017-09-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105812660A (en) | Video processing method based on geographic position | |
CN105975570B (en) | Video searching method and system based on geographical location | |
CN106453924B (en) | A kind of image capturing method and device | |
KR101753031B1 (en) | Mobile terminal and Method for setting metadata thereof | |
KR101788598B1 (en) | Mobile terminal and information security setting method thereof | |
CN105827959A (en) | Geographic position-based video processing method | |
CN108111874B (en) | file processing method, terminal and server | |
CN109844734B (en) | Picture file management method, terminal and computer storage medium | |
CN104881296A (en) | iOS system based picture deletion method and device | |
KR20130023074A (en) | Method and apparatus for performing video communication in a mobile terminal | |
US20120046042A1 (en) | Apparatus and method for power control in geo-tagging in a mobile terminal | |
WO2014114144A1 (en) | Method, server and terminal for information interaction | |
CN106453056A (en) | Mobile terminal and method for safely sharing picture | |
CN105933651B (en) | Method and apparatus based on target route jumper connection video | |
CN106534552B (en) | Mobile terminal and its photographic method | |
US11416571B2 (en) | Searchability of incident-specific social media content | |
CN104735259B (en) | Mobile terminal acquisition parameters method to set up, device and mobile terminal | |
CN106657950A (en) | Projection device management device, method and projection data sharing device | |
JP7080336B2 (en) | Methods and systems for sharing items in media content | |
CN104732218B (en) | The method and device that image is shown | |
CN103826060A (en) | Photographing method and terminal | |
KR101420884B1 (en) | Method and system for providing image search service for terminal location | |
US20080114726A1 (en) | Method to query cell phones for pictures of an event | |
CN105812572B (en) | Image saving method and terminal | |
KR20210090920A (en) | Method and Apparatus for Creating and Retrieving CCTV Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20160727 |