CN105975570A - Geographic position-based video search method and system - Google Patents
Geographic position-based video search method and system
- Publication number
- CN105975570A CN105975570A CN201610288439.8A CN201610288439A CN105975570A CN 105975570 A CN105975570 A CN 105975570A CN 201610288439 A CN201610288439 A CN 201610288439A CN 105975570 A CN105975570 A CN 105975570A
- Authority
- CN
- China
- Prior art keywords
- video
- geographical position
- mapping table
- keyword
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Library & Information Science (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention relates to a geographic position-based video search method and system. The method comprises the following steps: receiving a video search request sent by a client; parsing a target keyword from the video search request; obtaining a geographic position corresponding to the target keyword; retrieving videos matched with the geographic position; and generating video display data according to the retrieved videos and returning the video display data to the client. When a video search is carried out, the target keyword input by the user is mapped to one or more geographic positions, and matching videos are then retrieved on the basis of those geographic positions. This provides an accurate video search method that satisfies the demand for precise video retrieval.
Description
Technical field
The present invention relates to video search technology, and in particular to a geographic position-based video search method and system.
Background art
At present, video files on the network have no association with geographic positions. Unless a video file carries a label explicitly annotating an event or position, it is difficult for a user to find, among a massive number of video files, the target file related to a specific venue. Moreover, a user often knows only a few keywords related to an event's name, without more precise information such as the scene or time of the event, so the user has no way to locate the required video file.
Summary of the invention
In view of this, it is necessary to provide a geographic position-based video search method and system, which can solve the problem in the prior art that videos are difficult to retrieve precisely.
A geographic position-based video search method includes:
receiving a video search request sent by a client;
parsing a target keyword from the video search request;
obtaining a geographic position corresponding to the target keyword;
retrieving videos matched with the geographic position; and
generating video display data according to the retrieved videos and returning the video display data to the client.
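The steps above can be sketched as a single request handler. This is a minimal illustration, not the claimed implementation: the request shape, the `mapping_table` and `video_index` structures, and all names are assumptions introduced for clarity.

```python
def parse_keyword(request: dict) -> str:
    """Parse the target keyword from the video search request (hypothetical shape)."""
    return request["keyword"].strip()

def handle_video_search(request, mapping_table, video_index):
    """The claimed steps after the request is received; all names are illustrative."""
    keyword = parse_keyword(request)                 # parse the target keyword
    positions = mapping_table.get(keyword, [])       # keyword -> geographic positions
    videos = [v for pos in positions                 # retrieve videos matched with
              for v in video_index.get(pos, [])]     # those positions
    return {"query": keyword, "videos": videos}      # display data returned to the client
```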
In one embodiment, the above method further includes: building a mapping table between keywords and geographic positions.
Obtaining the geographic position corresponding to the target keyword then includes: obtaining the geographic position corresponding to the target keyword according to the mapping table.
In one embodiment, building the mapping table between keywords and geographic positions includes:
initializing the mapping table according to a point-of-interest database of an electronic map;
crawling web pages on the Internet, extracting keywords from the web pages, and updating the mapping table according to the geographic location information of the web page keywords; and/or
receiving videos and labels uploaded by users, parsing geographic location information from the videos, and updating the mapping table according to the geographic location information and labels of the videos.
In one embodiment, the method further includes: updating the mapping relations between keywords and geographic positions in the mapping table according to users' video access data.
In one embodiment, when the videos matched with the geographic position are retrieved, the videos are sorted by one or more of distance, video capture time, and video access count.
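A simple way to realize this sorting embodiment is a composite sort key. The field names and the priority order of the keys are illustrative choices; the text only says "one or several" of the criteria may be used.

```python
def rank_videos(videos):
    """Sort matched videos: nearest first, then newest capture time, then most viewed.
    Field names (distance_km, capture_time, access_count) are assumptions."""
    return sorted(videos, key=lambda v: (v["distance_km"],
                                         -v["capture_time"],
                                         -v["access_count"]))
```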
A geographic position-based video search system includes:
a request receiving module, configured to receive a video search request sent by a client;
a request parsing module, configured to parse a target keyword from the video search request;
a position obtaining module, configured to obtain a geographic position corresponding to the target keyword;
a video retrieval module, configured to retrieve videos matched with the geographic position; and
a video returning module, configured to generate video display data according to the retrieved videos and return the video display data to the client.
In one embodiment, the video search system further includes a mapping table building module, configured to build the mapping table between keywords and geographic positions.
The position obtaining module then obtains the geographic position corresponding to the target keyword according to the mapping table built by the mapping table building module.
In one embodiment, the mapping table building module builds the mapping table between keywords and geographic positions by:
initializing the mapping table according to a point-of-interest database of an electronic map;
crawling web pages on the Internet, extracting keywords from the web pages, and updating the mapping table according to the geographic location information of the web page keywords; and/or
receiving videos and labels uploaded by users, parsing geographic location information from the videos, and updating the mapping table according to the geographic location information and labels of the videos.
In one embodiment, the video search system further includes a mapping table updating module, configured to update the mapping relations between keywords and geographic positions in the mapping table according to users' video access data.
In one embodiment, when the video retrieval module retrieves the videos matched with the geographic position, the videos are sorted by one or more of distance, video capture time, and video access count.
According to the above technical solution, when a video search is carried out, the target keyword input by the user is mapped to one or more geographic positions, and matching videos are then retrieved based on those geographic positions, providing an accurate video search method that satisfies the demand for precise video retrieval. Moreover, the mapping table between keywords and geographic positions is continuously updated according to users' video access data, so that the mapping relations between keywords and geographic positions become more and more accurate, and the whole video search system, through this dynamic updating process, automatically adjusts to users' browsing interests.
To make the above and other objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below in conjunction with the accompanying drawings.
Brief description of the drawings
Fig. 1 is a schematic architecture diagram of the geographic position-based video information processing system provided by an embodiment of the present invention.
Fig. 2 is a structural block diagram of the mobile shooting terminal of the video information processing system of Fig. 1.
Fig. 3 is a schematic diagram of the data structure of the video data uploaded by the mobile shooting terminal of Fig. 2.
Fig. 4 is a schematic flowchart of the video information processing system of Fig. 1 processing the video data uploaded by the mobile shooting terminal.
Fig. 5 is a flowchart of the video search method provided by an embodiment of the present invention.
Fig. 6 is a schematic diagram of an interface of the method of Fig. 5.
Fig. 7 is a schematic diagram of the mapping table in the method of Fig. 5.
Fig. 8 is a flowchart of the video search method provided by another embodiment of the present invention.
Fig. 9 to Fig. 11 are schematic diagrams of search result displays of the video search method provided by embodiments of the present invention.
Fig. 12 is a module diagram of the video search system provided by an embodiment of the present invention.
Detailed description of the invention
To further explain the technical means adopted by the present invention to achieve the intended objects and their effects, specific embodiments, structures, features and effects of the present invention are described in detail below in conjunction with the accompanying drawings and preferred embodiments.
Referring to Fig. 1, it is a schematic architecture diagram of the geographic position-based video information processing system provided by the first embodiment of the present invention. As shown in Fig. 1, the video information processing system 100 may include: a mobile shooting terminal 10, a cloud server system 20 and a client 30.
The mobile shooting terminal 10 may specifically be any mobile electronic terminal with a camera, such as a mobile phone, a tablet computer, an unmanned aerial vehicle, etc. Referring to Fig. 2, it is a structural schematic diagram of the mobile shooting terminal 10. The mobile shooting terminal 10 includes a memory 102, a storage controller 104, one or more processors 106 (only one is shown in the figure), a peripheral interface 108, a network module 110, an audio circuit 111, a GPS (Global Positioning System) module 112, sensors 114, a camera module 116 and a power module 122. These components communicate with one another through one or more communication buses/signal lines.
Those skilled in the art will appreciate that the structure shown in Fig. 2 is only illustrative and does not limit the structure of the mobile shooting terminal 10. For example, the mobile shooting terminal 10 may include more or fewer components than shown in Fig. 2, or have a configuration different from that shown in Fig. 2.
The memory 102 may be used to store software programs and modules, such as the program instructions/modules corresponding to the methods and devices in the embodiments of the present invention. By running the software programs and modules stored in the memory 102, the processor 106 performs various functional applications and data processing.
The memory 102 may include a high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memories or other non-volatile solid-state memories. In some examples, the memory 102 may further include memories located remotely relative to the processor 106, and these remote memories may be connected to the above server through a network. Examples of such a network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network and combinations thereof. Access to the memory 102 by the processor 106 and other possible components may be performed under the control of the storage controller 104.
The peripheral interface 108 couples various input/output devices to the processor 106. The processor 106 runs various software and instructions in the memory 102 to perform the various functions and data processing of the above server. In some embodiments, the peripheral interface 108, the processor 106 and the storage controller 104 may be implemented in a single chip. In other examples, they may each be implemented by a separate chip.
The network module 110 is used to receive and send network signals. The network signals may include wireless signals. In one embodiment, the network module 110 is essentially a radio frequency module, which receives and sends electromagnetic waves and realizes the mutual conversion between electromagnetic waves and electric signals, thereby communicating with a communication network or other devices. The radio frequency module may include various existing components for performing these functions, such as an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, a memory, etc. The radio frequency module may communicate with various networks such as the Internet, an intranet or a wireless network, or communicate with other devices through a wireless network. The wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. The wireless network may use various communication standards, protocols and technologies, including, but not limited to, Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Wireless Fidelity (WiFi) (such as IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for mail, instant messaging and short messages, and any other suitable communication protocol, even including protocols that have not yet been developed.
The audio circuit 111 provides an interface for recording by the mobile shooting terminal 10. Specifically, the audio circuit 111 receives an electric signal from a microphone, converts the electric signal into audio data, and transmits the audio data to the processor 106 for further processing.
The GPS module 112 is used to receive positioning signals broadcast by GPS satellites and calculate its own position according to the positioning signals. The position may be represented, for example, by longitude, latitude and altitude. It can be understood that positioning is not limited to the GPS system. For example, other available satellite positioning systems include the BeiDou Navigation Satellite System (Compass Navigation Satellite System, CNSS) and the Global Navigation Satellite System (GLONASS). Moreover, positioning is not limited to satellite positioning technology; wireless positioning technologies may also be used, such as positioning based on wireless base stations or WiFi. In that case, the GPS module 112 may be replaced by a corresponding module, or positioning may be realized directly by the processor 106 executing a specific positioning program.
Examples of the sensors 114 include, but are not limited to: light sensors, attitude sensors and other sensors. The ambient light sensor can sense the brightness of ambient light, so that shooting can be adjusted accordingly. The attitude sensor may include, for example, an acceleration sensor, a gravity sensor, a gyroscope, etc., which can detect the spatial attitude of the mobile shooting terminal 10, such as the rotation angles in all directions. It can be understood that the rotation angles of the mobile shooting terminal 10 in all directions correspond to the shooting direction. Other sensors may include a barometer, a hygrometer, a thermometer, etc.
The camera module 116 is used to shoot photos or videos. The shot photos or videos may be stored in the memory 102 and may be sent through the network module 110. The camera module 116 may specifically include components such as a lens module, an image sensor and a flash lamp. The lens module images the object being shot and maps the image onto the image sensor. The image sensor receives the light from the lens module to record image information. Specifically, the image sensor may be implemented based on a Complementary Metal Oxide Semiconductor (CMOS) sensor, a Charge-Coupled Device (CCD) or other image sensing principles. The flash lamp is used for exposure compensation when shooting. In general, the flash lamp used in the mobile shooting terminal 10 may be a Light Emitting Diode (LED) flash lamp.
The power module 122 provides power supply to the processor 106 and the other components. Specifically, the power module 122 may include a power management system, one or more power supplies (such as a battery or alternating current), a charging circuit, a power failure detection circuit, an inverter, a power status indicator, and any other components related to the generation, management and distribution of electric power in the mobile shooting terminal 10.
The software and program modules stored in the memory 102 may include an operating system 130 and application programs running on the operating system 130. The operating system 130 may include various software components and/or drivers for managing system tasks (such as memory management, storage device control, power management, etc.), and can communicate with various hardware or software components, thereby providing a running environment for other software components. The application programs may include: a shooting module 131, an additional information adding module 132, a video data packaging module 133 and a data sending module 134.
The shooting module 131 is used to call the camera module 116 to shoot and obtain video data; the additional information adding module 132 is used to obtain the additional information corresponding to the current video frame and add the additional information to the current video frame; the video data packaging module 133 is used to pack the data of one or more video frames to which additional information has been added; the data sending module 134 is used to send the packed video data to the cloud server system 20, so that the cloud server system 20 can provide various information services based on the additional information in the received video data.
As shown in Fig. 3, a single video data packet may include multiple video frames, and each video frame includes both the additional information and the video data of that frame. The video data may use any format (for example, stored as H.264 or MPEG-4, etc.).
The additional information may include two classes. One class is editable additional information: the user can modify, add or delete this type of information through a specific application, and it is generally used to store information input by the user. The other class is non-editable additional information: once written into a video frame, it can no longer be edited by the user, and it is generally used to store status information obtained in real time.
In a specific embodiment, the editable additional information may include: labels input by the user, character introductions and similar information.
In a specific embodiment, the editable additional information may include: codes of instructions input by the user. The instructions input by the user may include sharing, reporting, etc.
In a specific embodiment, the non-editable additional information may include: positioning information, such as the longitude, latitude and altitude obtained by the GPS module 112.
In a specific embodiment, the non-editable additional information may include: the attitude information of the mobile shooting terminal 10, for example, the rotation angles of the mobile shooting terminal 10 or the camera module 116 in all directions. The attitude information of the mobile shooting terminal 10 can be obtained through the sensors 114.
In a specific embodiment, the non-editable additional information may include: the shooting time of the current video frame.
In a specific embodiment, the non-editable additional information may include: the user identification information of the video shooter. The user identification information here may be, for example, the user's account number in a network account system, or any other information that can uniquely determine the user account in a network account system. At any given moment, the user of the mobile shooting terminal 10, i.e., the video shooter, can be defined as only one person. This user may be the user account bound to the mobile shooting terminal 10, or a user account authorized to use the mobile shooting terminal 10.
In a specific embodiment, the non-editable additional information may include: the check information of the video data of the current video frame. The check information is, for example, calculated from the video data using a hash algorithm, and may be used to verify whether the video data has been modified. In this way, no matter how this video frame is copied or transmitted, whether the video data has been modified can always be verified based on this check information, so that the authenticity of the video data can be further confirmed. This provides a technical guarantee for using video as judicial evidence.
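The check-information scheme can be sketched in a few lines. SHA-256 is an illustrative choice here; the text only requires "a hash algorithm", and the function names are assumptions.

```python
import hashlib

def frame_digest(video_data: bytes) -> str:
    """Check information for one frame (SHA-256 chosen for illustration)."""
    return hashlib.sha256(video_data).hexdigest()

def is_untampered(video_data: bytes, stored_digest: str) -> bool:
    """Recompute the digest and compare with the stored check information."""
    return frame_digest(video_data) == stored_digest
```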
The editable additional information may be written into only some of the video frames. For example, among the multiple video frames produced in one second (other time lengths are also possible), the editable additional information may be written into only one fixed video frame (such as the first frame). The video frame carrying the editable additional information can be defined as the key video frame of that period. In this way, the editable additional information can be written directly into a video frame while the storage space it occupies is reduced as much as possible.
The non-editable additional information is generally obtained in real time, and can therefore be written into every frame. However, this is not the only approach; the non-editable additional information may still be written into only some of the video frames. For example, the non-editable additional information may be written into one video frame per second.
In addition, to prevent the non-editable additional information from being destroyed or tampered with, it may be encrypted using an asymmetric encryption algorithm before being written into a video frame. For example, the same public key may be stored in every mobile shooting terminal 10, and the non-editable additional information is encrypted with this public key. The private key corresponding to this public key is held only by the cloud server system 20; that is to say, only the cloud server system 20 can decrypt and read the encrypted additional information in the video frames.
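The public-key scheme can be illustrated with a textbook-sized RSA example. This is a toy sketch only (tiny primes, integers as messages): a real deployment would use a vetted cryptography library, and the key sizes and API are purely illustrative.

```python
def make_keypair():
    """Toy RSA keypair, for illustration only."""
    p, q = 61, 53
    n = p * q                # modulus shared by both keys
    phi = (p - 1) * (q - 1)
    e = 17                   # public exponent (stored on every terminal)
    d = pow(e, -1, phi)      # private exponent (held only by the cloud server)
    return (e, n), (d, n)

def encrypt(public_key, m: int) -> int:
    e, n = public_key
    return pow(m, e, n)      # terminal side: encrypt before writing to the frame

def decrypt(private_key, c: int) -> int:
    d, n = private_key
    return pow(c, d, n)      # server side: only the private key can read it
```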
As described above, in the video information processing system of this embodiment, the video data packets uploaded by the mobile shooting terminal 10 include both the video data itself and the above additional information.
As shown in Fig. 1, the cloud server system 20 may include a video processing server 21, a database 22, a distributed file storage system 23 and an application server 24.
The video processing server 21 is used to receive the video data packets uploaded by the mobile shooting terminal 10 and to further process the received packets.
Referring to Fig. 4, in a specific embodiment, the further processing of a received video data packet by the video processing server 21 includes the following steps:
Step S101: extract the additional information of every video frame in the video data packet. First, the video data packet is unpacked to obtain all the video frames, and then the additional information is parsed from the video frames according to a predefined protocol.
Step S102: process the video data into a format suitable for storage. For example, the video data itself is compressed, format-converted, etc. It should be understood, however, that the processing in this step applies only to the video data itself and does not affect the additional information. That is to say, even after processing, every frame of the video data still includes the same additional information as before. In addition, step S102 may be omitted; that is, after the additional information of every frame has been extracted, the received video data packet is used directly as the storage format.
Step S103: store the video data in the distributed file storage system and obtain the corresponding storage index. That is, the video data obtained in step S102, or the video data packet, is stored in the distributed file storage system, which returns a storage index; this storage index is used to access the video data.
Step S104: associate the additional information with the storage index and store them in the database. For example, a relational database may be used to store the additional information and the storage index, and the different items in the additional information (such as coordinates, shooting time, user ID, instruction codes, attitude information, labels, etc.) may each be stored in a separate field. It can be understood that if the additional information has been encrypted, it also needs to be decrypted first.
Through the above processing, the video data can be retrieved, counted, analyzed, output and otherwise processed based on the additional information, thereby providing various video applications to users; the specific processing can be realized by the application server 24.
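Steps S101 to S104 can be sketched as a small ingest function. The packet shape, the `FileStore` stand-in, and the database-as-list are all assumptions made for illustration; step S102 is reduced to a trivial concatenation since the text allows it to be omitted.

```python
class FileStore:
    """Minimal stand-in for the distributed file storage system of step S103."""
    def __init__(self):
        self._blobs = {}
    def put(self, blob):
        index = f"idx-{len(self._blobs)}"   # the returned storage index
        self._blobs[index] = blob
        return index

def ingest_packet(packet, file_store, db):
    """Sketch of steps S101-S104; the packet and db shapes are assumptions."""
    frames = packet["frames"]                            # S101: unpack the packet and
    infos = [f["additional_info"] for f in frames]       #       parse per-frame info
    blob = b"".join(f["video_data"] for f in frames)     # S102: trivial storage format
    index = file_store.put(blob)                         # S103: store, get the index
    for info in infos:                                   # S104: associate each info
        db.append({**info, "storage_index": index})      #       record with the index
    return index
```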
The client 30 may include, for example, a smartphone 31, a notebook computer 32, a desktop computer 33, a tablet computer 34 and any other intelligent terminal not shown in Fig. 1, such as smart glasses, an augmented reality helmet, a wearable smart device, etc.
The client 30 interacts with the application server 24, and can thus use the various video applications provided by the application server 24. Concrete application scenarios are described below.
Referring to Fig. 5, an embodiment of the present invention provides a video search method, which includes the following steps:
Step 201: receive the video search request sent by the client.
Referring to Fig. 6, it is a schematic diagram of the interface of a video browsing application program running in the smartphone 31. The video browsing application program has a video browsing interface 301, which includes a text input box 302 allowing the user to input a video search keyword. The video browsing interface 301 also includes a button 303; when the button 303 is clicked, a corresponding video search request is generated and sent to the application server 24. The video search request includes at least the video search keyword input by the user in the text input box, such as "Shenzhen". Of course, it can be understood that the video search request may include not only the video search keyword, but also any other information used for video retrieval, such as time, etc.
Step S202: parse a target keyword from the video search request.
After the smartphone 31 sends the video search request to the application server 24, the application server receives the request accordingly and further parses the target keyword from it. The target keyword may be a single word, several words, or even a sentence or a paragraph of text, without restriction. For example, in this embodiment, the target keyword is "Shenzhen marathon".
Step S203: obtain the geographical position corresponding to the target keyword.
Referring to Fig. 7, the video information processing system 100 may maintain a mapping table that stores the mapping relations between keywords and geographical positions. A geographical position here may be a coordinate (such as a latitude-longitude pair), a range (for example a coordinate plus a radius, or an area defined by a closed path, where the closed path can be defined by multiple end points along it), or a path. In this mapping table, each keyword may map to multiple geographical positions, and each geographical position may likewise map to multiple keywords. For some keywords, each corresponding geographical position has a ranking (rank), but this ranking is not required.
The mapping table may be created as follows. First, existing electronic map data stores a large amount of POI (Point of Interest) information, and each POI includes a name and a corresponding geographical position, so the POI database of the electronic map can directly serve as the initial mapping table. Second, web pages are crawled on the Internet, keywords are extracted from them, and, based on the associations between keywords, each non-POI keyword is assigned an initial geographical position equal to that of an associated POI keyword.
A POI keyword here is a keyword for which a matching point of interest can be found in the POI database; a non-POI keyword is one for which no matching point of interest exists in the POI database.
For example, from a news report about the 2016 Shenzhen marathon race, existing natural language processing techniques can extract the keywords "Shenzhen marathon" and "Shenzhen Bay sports center". "Shenzhen marathon" is a non-POI keyword, while "Shenzhen Bay sports center" is a POI keyword with a corresponding geographical position. Accordingly, the mapping table can set the initial geographical position of "Shenzhen marathon" to the geographical position of the Shenzhen Bay sports center.
Third, as described above, in the video information processing system 100, the video capture terminal 10 can upload user-set label information together with the video, and the video contains geographical position information. Accordingly, the mapping table can be updated by using the label as a keyword and storing its corresponding geographical position.
Through the above several approaches, the mapping table can be created. Thereafter, in step S203, the geographical position corresponding to the target keyword can be obtained by querying this mapping table.
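The construction and lookup described above can be sketched as follows. The POI entries and coordinates are invented placeholders; a real system would use the electronic map's POI database and keywords extracted from crawled pages.

```python
from collections import defaultdict

# Hypothetical POI database of the electronic map: name -> (lat, lon).
poi_db = {
    "Shenzhen Bay sports center": (22.5286, 113.9435),
    "citizen center": (22.5455, 114.0545),
}

# Way 1: the POI database directly seeds the initial mapping table.
# Each keyword maps to a list of geographical positions (rank = list order).
mapping_table = defaultdict(list)
for name, pos in poi_db.items():
    mapping_table[name].append(pos)

# Way 2: a non-POI keyword extracted from a crawled page is given the
# initial position of a POI keyword it is associated with on that page.
def associate(non_poi_keyword, poi_keyword):
    if poi_keyword in poi_db:
        pos = poi_db[poi_keyword]
        if pos not in mapping_table[non_poi_keyword]:
            mapping_table[non_poi_keyword].append(pos)

associate("Shenzhen marathon", "Shenzhen Bay sports center")

# Step S203 then reduces to a table lookup on the target keyword.
positions = mapping_table["Shenzhen marathon"]
```

Because a keyword can map to several positions and vice versa, the table values are lists rather than single entries.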
Step S204: retrieve videos matching the geographical position.
As noted above, each video includes the geographical position information of its shooting location; by comparing the geographical position information of a video with the geographical position of the target keyword, the videos matching that geographical position can be obtained.
When the geographical position is a coordinate, matching means that the distance between the shooting position of the video and the coordinate is less than a preset allowable error; when the geographical position is a geographic range, matching means that the shooting position of the video lies within that range; when the geographical position is a path, matching means that the distance between the shooting position of the video and some point on the path is less than the allowed error.
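The three matching rules can be sketched as predicates over a great-circle distance. The 500 m tolerance is an assumed default, not a value from the disclosure, and the range case is shown only in its circle (coordinate plus radius) form.

```python
import math

def haversine_m(a, b):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def matches_coordinate(shot_pos, coord, tolerance_m=500):
    # Coordinate case: shooting position within the allowable error.
    return haversine_m(shot_pos, coord) < tolerance_m

def matches_range(shot_pos, center, radius_m):
    # Range case (coordinate plus radius): shooting position inside the range.
    return haversine_m(shot_pos, center) <= radius_m

def matches_path(shot_pos, path_points, tolerance_m=500):
    # Path case: shooting position near at least one point of the path.
    return any(haversine_m(shot_pos, p) < tolerance_m for p in path_points)
```

A production system would index the stored coordinates (e.g. with a geohash or R-tree) rather than scan every video, but the predicates above capture the matching semantics.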
In this embodiment, the keyword entered by the user is "Shenzhen marathon", and according to the mapping table its corresponding geographical position is that of the Shenzhen Bay sports center, so videos shot at the Shenzhen Bay sports center are obtained. It will be appreciated that many videos may be shot at the same location, so the retrieval results may also be sorted and filtered in a certain order. For example, the results may be sorted by one or several of the following parameters: the distance to the geographical position (nearer videos first), the shooting time of the video (more recently shot videos first), and the view count of the video itself (videos with more views first).
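A minimal sketch of this sorting step, assuming each retrieved video carries its distance to the target position, its shooting time, and its view count (the field names and sample records are illustrative only):

```python
from datetime import datetime

# Hypothetical retrieval results for one target position.
videos = [
    {"id": "v1", "distance_m": 300, "shot_at": datetime(2016, 4, 1), "views": 50},
    {"id": "v2", "distance_m": 120, "shot_at": datetime(2016, 5, 3), "views": 10},
    {"id": "v3", "distance_m": 450, "shot_at": datetime(2016, 5, 4), "views": 900},
]

# Sort by one or several of the parameters: nearer first, then more
# recent, then more views (negation turns "larger first" into ascending).
ranked = sorted(
    videos,
    key=lambda v: (v["distance_m"], -v["shot_at"].timestamp(), -v["views"]),
)
```

Dropping or reordering the tuple components changes which parameter dominates, which is how "one or several" of the criteria can be applied.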
When the video information processing system 100 also stores the user's interest settings, the retrieval results may additionally be filtered according to those settings.
Step S205: generate video display data according to the retrieved videos and return it to the client.
After the videos matching the geographical position of the target keyword are obtained, video display data is generated from the retrieved videos and returned to the client. Referring to Fig. 6, after receiving the video data returned by the application server 24, the client displays video previews 304 in the video browsing interface 301; when a video preview 304 is clicked, the corresponding video playback interface is entered, and the video data is downloaded from the server and played. Of course, the video previews 304 are not required; the video data returned by the application server 24 may also be downloaded and played directly.
According to the technical solution of this embodiment, when a video search is performed, the target keyword entered by the user is mapped to one or more geographical positions, and matching videos are then retrieved based on those geographical positions, providing a precise video search method that meets the demand for precise video retrieval.
Referring to Fig. 8, which is a flow chart of a video search method according to another embodiment of the present invention. The video search method of this embodiment is similar to the method shown in Fig. 5, except that it further includes, after step S205: step S206, updating the mapping relations between keywords and geographical positions in the mapping table according to the user's video access data.
Updating the mapping relations between keywords and geographical positions here includes one or more of the following: adding a mapping between a keyword and a geographical position; and modifying the rank of the mapping relation between a keyword and a certain geographical position.
When, in the video search result list of a certain keyword, the user browses and watches a video A whose corresponding geographical position is position A, and the mapping table contains no mapping between this keyword and position A, a mapping between the keyword and position A can be added.
For example, if among the videos retrieved for the keyword "Shenzhen" the video most popular with users is one of the citizen center, the rank of the mapping relation between the keyword "Shenzhen" and the position of the citizen center can be raised, so that when the user searches for "Shenzhen", videos of the citizen center are ranked first, meeting the user's demand to the greatest extent.
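Step S206 can be sketched as follows, using a per-view access log. The keyword, the position names, and the log format are assumptions for illustration; the two cases in the code correspond to the two update modes named above.

```python
from collections import Counter

# Hypothetical access log: one (keyword, watched position) pair per view.
access_log = [
    ("Shenzhen", "citizen center"),
    ("Shenzhen", "citizen center"),
    ("Shenzhen", "Shenzhen Bay"),
]

# Mapping table: keyword -> positions ordered by rank (best first).
mapping_table = {"Shenzhen": ["Shenzhen Bay"]}

def update_mappings(table, log):
    views = Counter(log)
    for keyword, pos in views:
        # Case 1: add the mapping if the watched position is missing.
        if pos not in table.setdefault(keyword, []):
            table[keyword].append(pos)
    for keyword in table:
        # Case 2: modify the rank so most-viewed positions come first.
        table[keyword].sort(key=lambda p: -views[(keyword, p)])

update_mappings(mapping_table, access_log)
```

Running this periodically over fresh access data is what keeps the table converging toward the users' actual browsing interests.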
According to the technical solution of this embodiment, the mapping table between keywords and geographical positions is continuously updated according to the user's video access data, so that the mapping relations become increasingly accurate, and the whole video search system is in a dynamic update process, adjusting automatically as the user's browsing interests change.
The video search method of the embodiments of the present invention may also provide some special video browsing modes. For example, when the geographical position corresponding to the keyword is a coordinate, multiple videos shot at that coordinate at different times may be delivered to the client. Thus, referring to Fig. 9, the client 30 can provide a "timeline" video browsing mode: in addition to the normal video playback area, the video playback interface 401 may include a timeline 402 showing summary information of the videos shot at different times, and via the timeline 402 the user can switch to playing a video shot at a different time.
When the geographical position corresponding to the keyword is a geographic range, multiple videos shot from the surroundings of the range toward its center may be delivered to the client. Thus, referring to Fig. 10, the client 30 can provide a "multi-view" video browsing mode: the video playback interface can play videos shot from multiple angles simultaneously, allowing the user to watch the video from different viewpoints.
When the geographical position corresponding to the keyword is a path, videos shot at multiple positions on the path may be delivered to the client. Thus, referring to Fig. 11, the client 30 can provide a "path" video browsing mode: specifically, a path can be displayed on an electronic map with multiple video previews shown along it; clicking a video preview switches playback to the video corresponding to that preview. For example, the geographical position corresponding to the keyword "Shenzhen marathon" may be a path; when playing a video, previews of the videos at different positions on the path may also be displayed, and when the user clicks a preview, playback switches to that video.
Referring to Fig. 12, which is a module schematic diagram of a video search system provided by another embodiment of the present invention. The video search system includes: a mapping table construction module 51, a request receiving module 52, a request parsing module 53, a position acquisition module 54, a video retrieval module 55, a video return module 56, and a mapping table update module 57.
The mapping table construction module 51 is configured to build the mapping table between keywords and geographical positions.
The request receiving module 52 is configured to receive the video search request sent by the client.
The request parsing module 53 is configured to parse the target keyword from the video search request.
The position acquisition module 54 is configured to obtain the geographical position corresponding to the target keyword.
The video retrieval module 55 is configured to retrieve videos matching the geographical position.
The video return module 56 is configured to generate video display data according to the retrieved videos and return it to the client.
The mapping table update module 57 is configured to update the mapping relations between keywords and geographical positions in the mapping table according to the user's video access data.
According to the technical solution of this embodiment, when a video search is performed, the target keyword entered by the user is mapped to one or more geographical positions, and matching videos are then retrieved based on those geographical positions, providing a precise video search method that meets the demand for precise video retrieval. Moreover, the mapping table between keywords and geographical positions is continuously updated according to the user's video access data, so that the mapping relations become increasingly accurate, and the whole video search system is in a dynamic update process, adjusting automatically as the user's browsing interests change.
The above are only preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention is disclosed above by way of preferred embodiments, it is not limited thereto. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make slight changes or modifications into equivalent embodiments; any simple amendment, equivalent variation, or modification made to the above embodiments according to the technical spirit of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.
Claims (10)
1. A video search method based on geographical position, characterized by comprising:
receiving a video search request sent by a client;
parsing a target keyword from the video search request;
obtaining the geographical position corresponding to the target keyword;
retrieving videos matching the geographical position; and
generating video display data according to the retrieved videos and returning it to the client.
2. The video search method based on geographical position of claim 1, characterized by further comprising: building a mapping table between keywords and geographical positions;
wherein obtaining the geographical position corresponding to the target keyword comprises: obtaining the geographical position corresponding to the target keyword according to the mapping table.
3. The video search method based on geographical position of claim 2, characterized in that building the mapping table between keywords and geographical positions comprises:
initializing the mapping table according to a point-of-interest database of an electronic map;
crawling web pages on the Internet, extracting keywords from the web pages, and updating the mapping table according to the geographical position information of the web page keywords; and/or
receiving a video and a label uploaded by a user, parsing geographical position information from the video, and updating the mapping table according to the geographical position information of the video and the label.
4. The video search method based on geographical position of claim 1, characterized by further comprising: updating the mapping relations between keywords and geographical positions in the mapping table according to the user's video access data.
5. The video search method based on geographical position of claim 1, characterized in that, when retrieving videos matching the geographical position, the videos are sorted by one or several of distance, video shooting time, and video view count.
6. A video search system based on geographical position, characterized by comprising:
a request receiving module, configured to receive a video search request sent by a client;
a request parsing module, configured to parse a target keyword from the video search request;
a position acquisition module, configured to obtain the geographical position corresponding to the target keyword;
a video retrieval module, configured to retrieve videos matching the geographical position; and
a video return module, configured to generate video display data according to the retrieved videos and return it to the client.
7. The video search system based on geographical position of claim 6, characterized by further comprising: a mapping table construction module, configured to build a mapping table between keywords and geographical positions;
wherein the position acquisition module obtaining the geographical position corresponding to the target keyword comprises: obtaining the geographical position corresponding to the target keyword according to the mapping table built by the mapping table construction module.
8. The video search system based on geographical position of claim 7, characterized in that the mapping table construction module building the mapping table between keywords and geographical positions comprises:
initializing the mapping table according to a point-of-interest database of an electronic map;
crawling web pages on the Internet, extracting keywords from the web pages, and updating the mapping table according to the geographical position information of the web page keywords; and/or
receiving a video and a label uploaded by a user, parsing geographical position information from the video, and updating the mapping table according to the geographical position information of the video and the label.
9. The video search system based on geographical position of claim 6, characterized by further comprising: a mapping table update module, configured to update the mapping relations between keywords and geographical positions in the mapping table according to the user's video access data.
10. The video search system based on geographical position of claim 6, characterized in that, when the video retrieval module retrieves videos matching the geographical position, the videos are sorted by one or several of distance, video shooting time, and video view count.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610288439.8A CN105975570B (en) | 2016-05-04 | 2016-05-04 | Video searching method and system based on geographical location |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105975570A true CN105975570A (en) | 2016-09-28 |
CN105975570B CN105975570B (en) | 2019-10-18 |
Family
ID=56993668
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610288439.8A Expired - Fee Related CN105975570B (en) | 2016-05-04 | 2016-05-04 | Video searching method and system based on geographical location |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105975570B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7555718B2 (en) * | 2004-11-12 | 2009-06-30 | Fuji Xerox Co., Ltd. | System and method for presenting video search results |
CN102193918A (en) * | 2010-03-01 | 2011-09-21 | 汉王科技股份有限公司 | Video retrieval method and device |
CN102695118A (en) * | 2011-03-21 | 2012-09-26 | 腾讯科技(深圳)有限公司 | Method and apparatus of aggregate information presentation of location based service |
CN102946416A (en) * | 2012-10-19 | 2013-02-27 | 北京推博信息技术有限公司 | Method and device for issuing and acquiring multimedia advertisement |
CN104794171A (en) * | 2015-03-31 | 2015-07-22 | 百度在线网络技术(北京)有限公司 | Method and device for marking geographical location information of picture |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106777078A (en) * | 2016-12-13 | 2017-05-31 | 广东中星电子有限公司 | A kind of video retrieval method and system based on information database |
WO2018126385A1 (en) * | 2017-01-05 | 2018-07-12 | 深圳市前海中康汇融信息技术有限公司 | Geographic location-based database search method |
CN108460037A (en) * | 2017-02-20 | 2018-08-28 | 北京金奔腾汽车科技有限公司 | A method of stroke video is preserved and retrieved based on geographical location |
CN108197198A (en) * | 2017-12-27 | 2018-06-22 | 百度在线网络技术(北京)有限公司 | A kind of interest point search method, device, equipment and medium |
CN108415454B (en) * | 2018-02-02 | 2021-04-27 | 特力惠信息科技股份有限公司 | Real-time interactive interpretation method and terminal for unmanned aerial vehicle |
CN108415454A (en) * | 2018-02-02 | 2018-08-17 | 福建特力惠信息科技股份有限公司 | A kind of method and terminal of the interpretation of unmanned plane real-time, interactive |
CN108519997A (en) * | 2018-03-07 | 2018-09-11 | 阿里巴巴集团控股有限公司 | The recommendation method and device of related content |
CN108519997B (en) * | 2018-03-07 | 2021-11-23 | 创新先进技术有限公司 | Method and device for recommending related content |
CN109522449A (en) * | 2018-09-28 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | Searching method and device |
CN109598562A (en) * | 2019-01-15 | 2019-04-09 | 深圳市云歌人工智能技术有限公司 | The method, apparatus and electronic equipment of information publication |
CN112650882A (en) * | 2019-10-11 | 2021-04-13 | 杭州海康威视数字技术股份有限公司 | Video acquisition method, device and system |
WO2021137095A1 (en) * | 2019-12-31 | 2021-07-08 | International Business Machines Corporation | Geography aware file dissemination |
US11562094B2 (en) | 2019-12-31 | 2023-01-24 | International Business Machines Corporation | Geography aware file dissemination |
CN112783986A (en) * | 2020-09-23 | 2021-05-11 | 上海芯翌智能科技有限公司 | Object grouping compiling method and device based on label, storage medium and terminal |
Also Published As
Publication number | Publication date |
---|---|
CN105975570B (en) | 2019-10-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105975570A (en) | Geographic position-based video search method and system | |
WO2017156793A1 (en) | Geographic location-based video processing method | |
CN105827959B (en) | Method for processing video frequency based on geographical location | |
US9664527B2 (en) | Method and apparatus for providing route information in image media | |
CN103226575A (en) | Image processing method and device | |
RU2007112676A (en) | METHOD FOR ADDING GEOGRAPHIC TITLES TO IMAGES AT MOBILE COMMUNICATION TERMINAL | |
JP2013223235A (en) | Radio communication device, memory device, radio communication system, radio communication method and program | |
KR20090002657A (en) | Method for creating image file including information of individual and apparatus thereof | |
CN102547090A (en) | Digital photographing apparatus and methods of providing pictures thereof | |
CN108955715A (en) | navigation video generation method, video navigation method and system | |
JP2008027336A (en) | Location information delivery apparatus, camera, location information delivery method and program | |
CN104850547B (en) | Picture display method and device | |
US8918087B1 (en) | Methods and systems for accessing crowd sourced landscape images | |
EP2798538B1 (en) | Method and apparatus for providing metadata search codes to multimedia | |
CN105933651B (en) | Method and apparatus based on target route jumper connection video | |
KR20150064485A (en) | Method for providing video regarding poi, method for playing video regarding poi, computing device and computer-readable medium | |
CN104572830A (en) | Method and method for processing recommended shooting information | |
CN109168127A (en) | Resource recommendation method, device, electronic equipment and computer-readable medium | |
KR101420884B1 (en) | Method and system for providing image search service for terminal location | |
CN103262495A (en) | Method for transferring multimedia data over a network | |
KR102097199B1 (en) | Method and apparatus for providing image based on position | |
KR20170025732A (en) | Apparatus for presenting travel record, method thereof and computer recordable medium storing the method | |
US20150113039A1 (en) | Method and apparatus for defining hot spot based task for multimedia data | |
KR20090093431A (en) | Method and apparutus for providing guide information | |
CN111428134B (en) | Recommendation information acquisition method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20200714 Address after: Room 1718, 301 Qianxin Road, Jinshanwei Town, Jinshan District, Shanghai Patentee after: RUI-GANG INTELLIGENT TECHNOLOGY (SHANGHAI) CO.,LTD. Address before: 518000 A3 building, building three, light Dragon Industrial Zone, Pearl Dragon Road, Shenzhen, Guangdong, Nanshan District, four Patentee before: SHENZHEN ZHIYI TECHNOLOGY DEVELOPMENT Co.,Ltd. |
|
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20191018 |
|