CN105827959B - Video processing method based on geographic location - Google Patents
Video processing method based on geographic location
- Publication number
- CN105827959B (granted publication); application CN201610162223.7A
- Authority
- CN
- China
- Prior art keywords
- video
- mark object
- user
- trajectory line
- module
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/7867—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00249—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a photographic apparatus, e.g. a photographic printer or a projector
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Library & Information Science (AREA)
- Human Computer Interaction (AREA)
- Telephonic Communication Services (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The present invention relates to a video processing method, comprising: parsing corresponding geographic location information from every frame of video data; displaying a corresponding trajectory line in an electronic map interface according to the geographic location information; determining the start point and end point of a trajectory line segment selected by the user according to user input; and extracting the video data corresponding to the trajectory line segment from the video data and generating a corresponding video clip. The above method can improve the accuracy and convenience of marking video shot while moving. In addition, the present invention also provides a video processing apparatus.
Description
Technical field
The present invention relates to video processing technology, and more particularly to a video processing technique based on geographic location.
Background technique
Currently, to make it easy for users to find the pictures they need online, picture owners set keywords — that is, tags — according to the content of a picture when uploading and publishing it. In other words, the tags effectively classify the picture. In this way, a user can find the required picture by searching with the corresponding tags.
However, this convenient tagging approach breaks down for video files. The main reason is that the shooting time and place of a picture are fixed, so its tag content is relatively easy to determine, whereas for a video file (especially one shot while moving) the shooting time and shooting location change continuously, and the shot content varies constantly and widely, making it difficult to classify the video and define tags.
Meanwhile, if a user only needs one section of a video file, the entire file must be downloaded and then edited with a video editor, which is very inconvenient.
Therefore, there is an urgent need for a method that can add different tags to different passages of a video according to their content, so that users can directly find the relevant video and passage by tag, and retrieve only the needed video passage without downloading the entire video file.
Summary of the invention
In view of this, it is necessary to provide a video processing method and apparatus based on geographic location, which can solve the prior-art problem that video shot while moving is difficult to tag and index.
A video processing method based on geographic location, comprising:
parsing corresponding geographic location information from every frame of the video data;
displaying a corresponding trajectory line in an electronic map interface according to the geographic location information;
determining the start point and end point of a trajectory line segment selected by the user according to user input; and
extracting the video data corresponding to the trajectory line segment from the video data and generating a corresponding video clip.
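The four claimed steps can be read as a small pipeline. The sketch below is illustrative only and is not the patented implementation; the frame record, its field names, and the way the user's selection arrives as frame indices are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One video frame with the geographic information parsed from it."""
    index: int   # shooting order of the frame
    lat: float   # latitude parsed from the frame's additional information
    lon: float   # longitude parsed from the frame's additional information

def parse_trajectory(frames):
    """Steps 1-2: parse per-frame locations into an ordered trajectory line,
    one trajectory point per video frame."""
    return [(f.lat, f.lon) for f in sorted(frames, key=lambda f: f.index)]

def extract_clip(frames, start_index, end_index):
    """Steps 3-4: the start and end points chosen on the trajectory map to
    frame indices; the clip is every frame between them, inclusive."""
    lo, hi = sorted((start_index, end_index))
    return [f for f in sorted(frames, key=lambda f: f.index)
            if lo <= f.index <= hi]

frames = [Frame(i, 22.5 + i * 1e-4, 114.0 + i * 1e-4) for i in range(10)]
trajectory = parse_trajectory(frames)  # would be drawn on the map interface
clip = extract_clip(frames, 3, 6)      # user's selection spans frames 3..6
```

Because every trajectory point maps back to exactly one frame, selecting a segment of the line is equivalent to selecting a contiguous, time-ordered run of frames.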
In one embodiment, determining the start point and end point of the trajectory line segment selected by the user according to user input includes:
when an interception trigger command input by the user is detected, displaying a first mark object and a second mark object on the trajectory line, the first mark object and the second mark object indicating the start point and end point of the trajectory line segment; and
setting the position of the first mark object and/or the second mark object along the trajectory line according to the user's operation.
In one embodiment, the above method further includes: displaying a progress bar in a video playback interface, the progress bar corresponding to the video data;
when the interception trigger command input by the user is detected, also displaying a third mark object and a fourth mark object on the progress bar, the third mark object corresponding to the same video frame as the first mark object, and the fourth mark object corresponding to the same video frame as the second mark object; and
correspondingly updating the position of the third mark object when the position of the first mark object is set, and correspondingly updating the position of the fourth mark object when the position of the second mark object is set.
In one embodiment, the above method further includes:
setting the position of the third mark object and/or the fourth mark object along the progress bar according to the user's operation; and
correspondingly updating the position of the first mark object when the position of the third mark object is set, and correspondingly updating the position of the second mark object when the position of the fourth mark object is set.
In one embodiment, the above method further includes: also displaying a description information input interface when the interception trigger command input by the user is detected, receiving the description information input by the user through the description information input interface, and saving or transmitting the description information together with the video clip.
In one embodiment, the above method further includes: if an interception command input by the user is detected, saving the video clip as an individual video file; or
if a sharing command input by the user is detected, sharing the video clip to a selected social networking system.
A video processing apparatus based on geographic location, comprising:
a parsing module, for parsing corresponding geographic location information and shooting time from every frame of the video data;
a trajectory display module, for displaying a corresponding trajectory line in an electronic map interface according to the geographic location information and shooting time;
a trajectory line selection module, for determining the start point and end point of a trajectory line segment selected by the user according to user input; and
a video clip generation module, for extracting the video data corresponding to the trajectory line segment from the video data and generating a corresponding video clip.
In one embodiment, the above apparatus further includes:
a mode switching module, for displaying a first mark object and a second mark object on the trajectory line when the interception trigger command input by the user is detected, the first mark object and the second mark object indicating the start point and end point of the trajectory line segment; and
a trajectory line editing module, for setting the position of the first mark object and/or the second mark object along the trajectory line according to the user's operation.
In one embodiment, the mode switching module is also used to display a third mark object and a fourth mark object on the progress bar when the interception trigger command input by the user is detected, the third mark object corresponding to the same video frame as the first mark object, and the fourth mark object corresponding to the same video frame as the second mark object.
The apparatus further includes a progress bar editing module, for correspondingly updating the position of the third mark object when the position of the first mark object is set, and correspondingly updating the position of the fourth mark object when the position of the second mark object is set.
In one embodiment, the progress bar editing module is also used to set the position of the third mark object and/or the fourth mark object along the progress bar according to the user's operation;
the trajectory line editing module is also used to correspondingly update the position of the first mark object when the position of the third mark object is set, and to correspondingly update the position of the second mark object when the position of the fourth mark object is set.
In one embodiment, the mode switching module is also used to display a description information input interface when the interception trigger command input by the user is detected;
the apparatus further includes a description information editing module, for receiving the description information input by the user through the description information input interface, and saving or transmitting the description information together with the video clip.
In one embodiment, the apparatus further includes a saving module and/or a sharing module;
the saving module is used to save the video clip as an individual video file if the interception command input by the user is detected; and
the sharing module is used to share the video clip to a selected social networking system if the sharing command input by the user is detected.
According to the above technical solution, the user can select a video clip by selecting a segment of the trajectory line displayed on the electronic map, which allows the user to see clearly the geographic range covered by the video clip and improves the accuracy and convenience of marking video shot while moving.
To make the above and other objects, features and advantages of the present invention clearer and more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Detailed description of the invention
Fig. 1 is an architecture diagram of the geographic-location-based video information processing system provided by an embodiment of the present invention.
Fig. 2 is a structural block diagram of the mobile shooting terminal of the video information processing system of Fig. 1.
Fig. 3 is a schematic diagram of the data structure of the video data uploaded by the mobile shooting terminal of Fig. 2.
Fig. 4 is a flowchart of the geographic-location-based video processing method of an embodiment of the present invention.
Fig. 5 to Fig. 7 are interface schematic diagrams of the method of Fig. 4.
Fig. 8 to Fig. 12 are module diagrams of the geographic-location-based video processing apparatus of embodiments of the present invention.
Specific embodiment
To further illustrate the technical means adopted by the present invention to achieve the intended objects of the invention and their effects, specific embodiments, structures, features and effects of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
Referring to Fig. 1, which is an architecture diagram of the geographic-location-based video information processing system provided by the first embodiment of the present invention. As shown in Fig. 1, the video information processing system 100 may include: a mobile shooting terminal 10, a cloud server system 20, and a client 30.
The mobile shooting terminal 10 may specifically be any mobile electronic terminal with a camera, such as a mobile phone, a tablet computer, an unmanned aerial vehicle, etc. Referring to Fig. 2, which is a structural diagram of the mobile shooting terminal 10. The mobile shooting terminal 10 includes a memory 102, a storage controller 104, one or more processors 106 (only one is shown in the figure), a peripheral interface 108, a network module 110, an audio circuit 111, a GPS (Global Positioning System) module 112, sensors 114, a shooting module 116 and a power module 122. These components communicate with one another via one or more communication buses/signal lines.
Those skilled in the art will appreciate that the structure shown in Fig. 2 is merely illustrative and does not limit the structure of the mobile shooting terminal 10. For example, the mobile shooting terminal 10 may include more or fewer components than shown in Fig. 2, or have a configuration different from that shown in Fig. 2.
The memory 102 may be used to store software programs and modules, such as the program instructions/modules corresponding to the methods and apparatuses in the embodiments of the present invention. The processor 106 runs the software programs and modules stored in the memory 102 so as to perform various function applications and data processing.
The memory 102 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory or other non-volatile solid-state memory. In some examples, the memory 102 may further include memory located remotely relative to the processor 106; such remote memory may be connected via a network. Examples of such networks include but are not limited to the internet, an intranet, a local area network, a mobile communication network and combinations thereof. The processor 106 and other possible components may access the memory 102 under the control of the storage controller 104.
The peripheral interface 108 couples various input/output devices to the processor 106. The processor 106 runs the various software and instructions in the memory 102 to perform various functions and carry out data processing. In some embodiments, the peripheral interface 108, the processor 106 and the storage controller 104 may be implemented in a single chip. In some other examples, they may each be implemented by an independent chip.
The network module 110 is used to receive and transmit network signals. The network signals may include wireless signals. In one embodiment, the network module 110 is essentially a radio frequency module that receives and transmits electromagnetic waves, realizing the mutual conversion between electromagnetic waves and electrical signals, so as to communicate with a communication network or other devices. The radio frequency module may include various existing circuit elements for performing these functions, for example an antenna, a radio frequency transceiver, a digital signal processor, an encryption/decryption chip, a subscriber identity module (SIM) card, memory, etc. The radio frequency module may communicate with various networks such as the internet, an intranet or a wireless network, or communicate with other devices via a wireless network. The wireless network may include a cellular telephone network, a wireless local area network or a metropolitan area network. The wireless network may use various communication standards, protocols and technologies, including but not limited to Global System for Mobile Communication (GSM), Enhanced Data GSM Environment (EDGE), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Wireless Fidelity (WiFi) (such as IEEE 802.11a, IEEE 802.11b, IEEE 802.11g and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Worldwide Interoperability for Microwave Access (Wi-Max), other protocols for mail, instant messaging and short messages, any other suitable communication protocol, and even protocols that have not yet been developed.
The audio circuit 111 provides a recording interface for the mobile shooting terminal 10. Specifically, the audio circuit 111 receives an electrical signal from a microphone, converts the electrical signal into audio data, and transmits the audio data to the processor 106 for further processing.
The GPS module 112 is used to receive positioning signals broadcast by GPS satellites and to calculate its own position according to the positioning signals. The position may be expressed, for example, by longitude, latitude and altitude. It can be appreciated that the means of positioning is not limited to the GPS system. For example, other available satellite positioning systems include the Compass Navigation Satellite System (CNSS, BeiDou) and the Global Navigation Satellite System (GLONASS). In addition, positioning is not limited to satellite positioning technology; wireless positioning technologies may also be used, such as positioning based on wireless base stations or WiFi. In that case, the GPS module 112 may be replaced by a corresponding module, or a specific positioning program may be executed directly by the processor 106.
Examples of the sensors 114 include but are not limited to: light sensors, attitude sensors and other sensors. An ambient light sensor can sense the brightness of ambient light, so that shooting parameters can be adjusted accordingly. An attitude sensor may include, for example, an acceleration sensor, a gravimeter, a gyroscope, etc., and can detect the spatial attitude of the mobile shooting terminal 10, for example its rotation angles in various directions. It can be appreciated that the rotation angles of the mobile shooting terminal 10 in various directions correspond to the shooting direction. Other sensors may include a barometer, a hygrometer, a thermometer, etc.
The shooting module 116 is used to shoot photos or videos. The photos or videos shot can be stored in the memory 102 and can be sent via the network module 110. The shooting module 116 may specifically include components such as a lens module, an image sensor and a flash. The lens module images the shot target and maps the image onto the image sensor. The image sensor receives the light from the lens module and converts it into recorded image information. Specifically, the image sensor may be implemented based on a complementary metal oxide semiconductor (CMOS) sensor, a charge-coupled device (CCD) or other image sensing principles. The flash is used for exposure compensation when shooting. In general, the flash used in the mobile shooting terminal 10 may be a light emitting diode (LED) flash.
The power module 122 is used to supply power to the processor 106 and the other components. Specifically, the power module 122 may include a power management system, one or more power sources (such as a battery or alternating current), a charging circuit, a power failure detection circuit, an inverter, a power status indicator, and any other components associated with the generation, management and distribution of electric power in the mobile shooting terminal 10.
The software and program modules stored in the memory 102 may include an operating system 130 and application programs running on the operating system 130. The operating system 130 may include various software components and/or drivers for managing system tasks (such as memory management, storage device control, power management, etc.), and can communicate with various hardware or software components so as to provide a running environment for other software components. The application programs may include: a shooting module 131, an additional information adding module 132, a video data packaging module 133 and a data sending module 134.
The shooting module 131 is used to call the shooting module 116 to shoot and obtain video data; the additional information adding module 132 is used to obtain additional information corresponding to the current video frame and to add the additional information into the current video frame; the video data packaging module 133 is used to package the data of one or more video frames to which additional information has been added; and the data sending module 134 is used to send the packaged video data to the cloud server system 20, so that the cloud server system 20 can provide various information services based on the additional information in the received video data. In addition, the above software and program modules may also include a video processing apparatus 136, which can be used to process videos shot by the terminal itself or videos sent by other mobile shooting terminals, for example by intercepting video clips, sharing, etc.
As shown in Fig. 3, a single video data packet may include multiple video frames, and each video frame includes both the additional information and the video data of that frame. The video data may be stored in any format (such as H.264 or MPEG4).
The additional information may include two classes. One class is editable additional information: through a specific application, the user can modify, add or delete this type of information; editable additional information is generally used to store information input by the user. The other class is non-editable additional information: once written into a video frame, it can no longer be edited by the user; non-editable additional information is generally used to store status information obtained in real time.
In a specific embodiment, the editable additional information may include: information such as tags and text descriptions input by the user.
In a specific embodiment, the editable additional information may include: codes for commands input by the user. The commands input by the user may include sharing, reporting, etc.
In a specific embodiment, the non-editable additional information may include: location information, such as the longitude, latitude and altitude obtained through the GPS module 112.
In a specific embodiment, the non-editable additional information may include: attitude information of the mobile shooting terminal 10, for example the rotation angles of the mobile shooting terminal 10 or the shooting module 116 in various directions. The attitude information of the mobile shooting terminal 10 can be obtained through the sensors 114.
In a specific embodiment, the non-editable additional information may include: the shooting time of the current video frame.
In a specific embodiment, the non-editable additional information may include: the user identity information of the person shooting the video. The user identity information here may be, for example, the user's account number in a network account system, or other information that can uniquely identify a user account number in a network account system. At any given moment, the user of the mobile shooting terminal 10, i.e. the person shooting the video, may be limited to a single person. The user may be the user account bound to the mobile shooting terminal 10, or a user account authorized to use the mobile shooting terminal 10.
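The per-frame record described in the embodiments above can be modeled as follows. This is an illustrative sketch only; the field names and types are assumptions, and a real container format would interleave these records with the encoded stream. The frozen dataclass mirrors the write-once nature of the non-editable class.

```python
from dataclasses import dataclass, field

@dataclass
class EditableInfo:
    """User-supplied metadata; may be modified, added or deleted later."""
    tags: list = field(default_factory=list)
    description: str = ""

@dataclass(frozen=True)
class NonEditableInfo:
    """Status information captured in real time; write-once by construction."""
    lat: float             # latitude/longitude/altitude from the GPS module
    lon: float
    altitude: float
    shooting_time: float   # e.g. a UNIX timestamp for the frame
    user_id: str           # identity of the person shooting

@dataclass
class VideoFrame:
    payload: bytes           # encoded video data for this frame
    fixed: NonEditableInfo   # written into every frame at capture time
    editable: EditableInfo   # carried only by key video frames

frame = VideoFrame(
    payload=b"\x00\x01",     # placeholder for encoded picture bytes
    fixed=NonEditableInfo(lat=22.5, lon=114.0, altitude=10.0,
                          shooting_time=1700000000.0, user_id="user-1"),
    editable=EditableInfo(tags=["harbour"], description="flyover section"),
)
```

Attempting to reassign a field of `frame.fixed` raises an exception, which is a convenient in-process stand-in for the "cannot be edited after writing" property.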
In a specific embodiment, the non-editable additional information may include: check information for the video data of the current video frame. The check information is, for example, calculated from the video data using a hash algorithm, and can be used to verify whether the video data has been modified. In this way, no matter how the video frame is copied or transmitted, whether the video data has been modified can be verified based on the check information, so that the authenticity of the video data can be further confirmed; this provides a technical guarantee for using the video as judicial evidence.
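The check-information scheme can be illustrated with a standard cryptographic hash. The description only says "a hash algorithm", so the choice of SHA-256 below is an assumption:

```python
import hashlib

def compute_check_info(frame_payload: bytes) -> str:
    """Hash the frame's video data at capture time; the digest is stored
    in the frame's non-editable additional information."""
    return hashlib.sha256(frame_payload).hexdigest()

def verify_frame(frame_payload: bytes, stored_check_info: str) -> bool:
    """Recompute the digest on a received copy of the frame. Any change to
    the video data changes the digest, however the frame was copied."""
    return compute_check_info(frame_payload) == stored_check_info

original = b"\x00\x01encoded-frame-bytes"
check_info = compute_check_info(original)        # written once into the frame
intact = verify_frame(original, check_info)      # untouched copy
tampered = verify_frame(original + b"x", check_info)  # modified copy
```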
The editable additional information may be written into only some of the video frames. For example, among the multiple video frames generated within one second (or another time span), the editable additional information may be written into only a fixed video frame (such as the first frame). The video frame carrying the editable additional information can be defined as the key video frame within that time span. With this approach, the editable additional information can be written directly into the video frames, while the storage space occupied by the editable additional information is minimized.
Non-editable additional information is typically obtained in real time and can therefore be written into every frame. However, it is not limited to this approach; non-editable additional information may likewise be written into only some of the video frames, for example into one video frame per second.
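The write policy sketched above — editable information only on the first frame of each second, non-editable information on every frame — can be expressed as a key-frame selection. The 30 fps figure and the per-second granularity in this sketch are assumptions:

```python
def key_frame_indices(shooting_times):
    """Given per-frame shooting times in seconds, return the indices of the
    key video frames: the first frame generated within each whole second.
    Only these frames carry the editable additional information; the
    non-editable information is written into every frame regardless."""
    seen_seconds = set()
    key_indices = []
    for i, t in enumerate(shooting_times):
        second = int(t)
        if second not in seen_seconds:
            seen_seconds.add(second)
            key_indices.append(i)
    return key_indices

# 30 fps for two seconds of video: only frames 0 and 30 become key frames,
# so the editable information is stored twice instead of sixty times.
times = [i / 30 for i in range(60)]
keys = key_frame_indices(times)
```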
In addition, in order to prevent the non-editable additional information from being destroyed or tampered with, it can be encrypted using an asymmetric encryption algorithm before being written into the video frames. For example, the same public key may be stored in each mobile shooting terminal 10, and the non-editable additional information is encrypted using the public key; the private key corresponding to the public key exists only in the cloud server system 20. That is to say, only the cloud server system 20 can decrypt and read the encrypted additional information written in the video frames.
As described above, in the video information processing system of this embodiment, the video data uploaded by the mobile shooting terminal 10 includes the video data itself and the above additional information.
Referring to Fig. 4, which is a flowchart of the geographic-location-based video processing method provided by an embodiment of the present invention. The method may be executed, for example, by the mobile shooting terminal 10 or the client 30 described above. For the mobile shooting terminal 10, the video data may be shot by itself; the video data in the client 30 may either be synchronized directly from the mobile shooting terminal 10 bound to it, or be obtained from the cloud server system 20.
As shown in Fig. 4, the method includes the following steps:
Step S101: parsing corresponding geographic location information from every frame of the video data.
The video data here may be a video file or video stream data, without restriction, as long as the video data of every frame includes the above geographic location information.
Step S102: displaying a corresponding trajectory line in an electronic map interface according to the geographic location information.
Referring to Fig. 5, in one example, a corresponding trajectory line 11 is displayed in the electronic map interface 1 according to the video data obtained in step S101. The trajectory line 11 as a whole corresponds to all the video data obtained in step S101. Each point on the trajectory line 11 can be mapped to a corresponding video frame; that is to say, each point on the trajectory line 11 can be mapped to corresponding geographic location information.
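Mapping a touched point on the trajectory line back to a video frame can be done with a nearest-location lookup. The sketch below treats latitude and longitude as planar coordinates, which is a simplifying assumption the actual implementation need not share:

```python
import math

def nearest_frame_index(frame_locations, point):
    """frame_locations: (lat, lon) per frame, in shooting order.
    point: the (lat, lon) the user touched on the trajectory line.
    Returns the index of the frame shot closest to that point."""
    best_index, best_distance = 0, float("inf")
    for i, (lat, lon) in enumerate(frame_locations):
        distance = math.hypot(lat - point[0], lon - point[1])
        if distance < best_distance:
            best_index, best_distance = i, distance
    return best_index

# Five frames shot while moving north; a touch near the third position
# resolves to frame index 2.
locations = [(22.500 + i * 0.001, 114.000) for i in range(5)]
picked = nearest_frame_index(locations, (22.5021, 114.0001))
```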
Step S103: determining the start point and end point of the trajectory line segment selected by the user according to user input.
For example, when an interception trigger command input by the user is detected, the video clip selection mode is opened. The interception trigger command here can be triggered in various ways, for example by the user long-pressing the trajectory line 11, double-clicking the trajectory line 11, or clicking a specific button, menu, etc.
When the trajectory line segment selection mode is activated, a first mark object 12 and a second mark object 13 can be displayed on the trajectory line 11. The first mark object 12 and the second mark object 13 may each include, for example, an icon indicating the start point or end point of the trajectory line segment. Between the first mark object 12 and the second mark object 13 lies the currently selected trajectory line segment 14, which can be highlighted, for example by bolding or changing its color, so as to distinguish the trajectory line segment 14 from the main body of the trajectory line 11.
The first mark object 12 and the second mark object 13 can each slide along the trajectory line 11 in response to user operations (such as dragging). As the first mark object 12 and the second mark object 13 slide along the trajectory line 11, the trajectory line segment 14 between them lengthens, shortens or moves correspondingly.
Step S104: extracting the video data corresponding to the trajectory line segment from the video data and generating a corresponding video clip.
As described above, each point on the trajectory line 11 can be mapped to a corresponding video frame; that is to say, the start point of the trajectory line segment corresponds to a start video frame and the end point corresponds to an end video frame. Since the video frames themselves are ordered (for example by shooting time), all the video frames located between the start video frame and the end video frame can be filtered out of the full video data.
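Step S104 then reduces to a range filter over the time-ordered frames. This sketch assumes each frame record carries its shooting time, as described for the additional information earlier:

```python
def clip_between(frames_with_times, start_time, end_time):
    """frames_with_times: iterable of (shooting_time, frame_payload) pairs.
    Returns the payloads of every frame between the start video frame and
    the end video frame, inclusive, in shooting order."""
    lo, hi = sorted((start_time, end_time))
    selected = [(t, p) for t, p in frames_with_times if lo <= t <= hi]
    selected.sort(key=lambda pair: pair[0])   # restore shooting order
    return [p for _, p in selected]

frames = [(float(t), f"frame-{t}") for t in (0, 1, 2, 3, 4, 5)]
clip = clip_between(frames, 1.0, 3.0)   # start and end frames at t=1 and t=3
```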
After the video clip is generated, it can be processed further. The further processing here may be, for example, saving it as an individual video file, or sharing the video clip to a social network.
Specifically, referring to Fig. 5, after the video clip selection mode is activated, a menu bar 15 can be displayed in the electronic map interface 1. The menu bar 15 includes multiple buttons, each of which allows the user to activate a different function by clicking it, such as saving the video file or sharing the video clip.
In this embodiment, the menu bar 15 includes a capture button and a sharing button. After the user completes the selection of the video clip, i.e. after step S104, clicking the capture button executes the process of saving the selected video clip as a separate video file, while clicking the sharing button executes the process of sharing the selected video clip to a social network.
In another embodiment, in step S102, in addition to displaying the trajectory line 11 in the electronic map interface 1, a video picture is also displayed in a video playback interface 2. The video playback interface 2 includes a progress bar 21, which as a whole corresponds to all of the video data obtained in step S101. Each point on the progress bar 21 can likewise be mapped to a corresponding video frame in the video data.
Correspondingly, when the trajectory-line segment selection mode is activated, in addition to displaying the first mark object 12 and the second mark object 13 on the trajectory line 11, a third mark object 22 and a fourth mark object 23 are also displayed on the progress bar 21, where the third mark object 22 corresponds one-to-one with the first mark object 12, and the fourth mark object 23 corresponds one-to-one with the second mark object 13. That is, the first mark object 12 and the third mark object 22 correspond to the same video frame, and the second mark object 13 and the fourth mark object 23 correspond to the same video frame. The portion 24 of the progress bar between the third mark object 22 and the fourth mark object 23 then corresponds to the currently selected video clip.
When the position of the first mark object 12 is moved according to a user operation, the position of the third mark object 22 is correspondingly updated; when the position of the second mark object 13 is moved according to a user operation, the position of the fourth mark object 23 is correspondingly updated.
Conversely, when the position of the third mark object 22 is moved according to a user operation, the position of the first mark object 12 is correspondingly updated; when the position of the fourth mark object 23 is moved according to a user operation, the position of the second mark object 13 is correspondingly updated. In this way, the video clip between the first mark object 12 and the second mark object 13 and the video clip between the third mark object 22 and the fourth mark object 23 can be kept fully synchronized. That is, the user can select a video clip either through the trajectory line 11 or through the progress bar 21.
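One way to keep the two mark-object pairs fully synchronized is to store a single frame index per endpoint and derive both the trajectory-line mark and the progress-bar mark from it. The sketch below is an illustrative design under assumed names, not the patent's implementation:

```python
# Sketch of the two-way synchronization: each endpoint of the selection is
# one frame index; the trajectory-line marks (12/13) and the progress-bar
# marks (22/23) are both views of those indices, so moving either view
# updates the other automatically. All names are hypothetical.

class ClipSelection:
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.start = 0               # frame index shared by marks 12 and 22
        self.end = n_frames - 1      # frame index shared by marks 13 and 23

    def set_from_trajectory(self, which, frame_idx):
        # user dragged mark 12 or 13 along the trajectory line
        self._set(which, frame_idx)

    def set_from_progress_bar(self, which, fraction):
        # user dragged mark 22 or 23; map the bar fraction to a frame index
        self._set(which, round(fraction * (self.n_frames - 1)))

    def _set(self, which, frame_idx):
        frame_idx = max(0, min(self.n_frames - 1, frame_idx))
        setattr(self, which, frame_idx)  # which is "start" or "end"

    def progress_fraction(self, which):
        # derived position of mark 22 or 23 on the progress bar
        return getattr(self, which) / (self.n_frames - 1)

sel = ClipSelection(100)
sel.set_from_progress_bar("start", 0.25)   # drag mark 22 to 25% of the bar
sel.set_from_trajectory("end", 80)         # drag mark 13 to frame 80
```

Because both interfaces read and write the same two indices, the selections can never drift apart, which matches the "fully synchronized" behavior described above.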
At this point, the above-mentioned menu bar 15 may be displayed in the electronic map interface 1, in the video playback interface 2, or in both simultaneously. Specifically, if the electronic map interface 1 detects a capture triggering instruction input by the user, the menu bar 15 is displayed in the electronic map interface 1, or simultaneously in both the electronic map interface 1 and the video playback interface 2; if the video playback interface 2 detects the capture triggering instruction input by the user, the menu bar 15 is displayed in the video playback interface 2, or simultaneously in both the electronic map interface 1 and the video playback interface 2.
In another embodiment, referring to Fig. 5 and Fig. 6 together, the above method can also include a step of adding description information to the selected video clip. For example, the menu bar 15 further includes a label button 16. When the user clicks the label button 16, an input interface can pop up, allowing the user to input description information to be added to the current video clip. After the user completes the input, the description information input by the user can also be displayed in the menu bar 15. The description information here can be a label or introductory text. Referring to Fig. 6, in this embodiment the user inputs two labels: BMW and collision. Correspondingly, the added labels are displayed in the menu bar 15. The user can delete an added label, or continue adding labels to the current video clip.
After the user has added description information to a video clip, the added description information can be applied in further processing. For example, the description information can be written into the video clip. If the user clicks the capture button, the added labels or other description information can also be included in the file name of the saved file; if the user clicks the sharing button, the description information input by the user can be sent to the social network system together with the clip, either to be published along with it or to be stored by the social network system so that it can be retrieved.
In another embodiment, after the user adds description information to the selected video clip, the terminal that edits the video clip (i.e. the above-mentioned mobile shooting terminal 10 or client 30) can also synchronize the user's description information to the cloud server system 20, so that the cloud server system 20 stores the description information in association with the corresponding video data, or writes the description information into the corresponding video frames. In this way, the video description information added by users is fully preserved in the cloud server system 20 and can be used for video search, and more complex functions and applications can be implemented based on this description information. For example, by performing statistical analysis on labels by geographical location, it can be discovered whether a hot event has occurred at a certain place, and the video corresponding to the hot event can then be published on a content delivery system (such as a video website).
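The location-based label statistics mentioned above could be sketched as a simple aggregation: count how often each label occurs near each place and flag (place, label) pairs whose counts exceed a threshold. The grid-rounding approach, threshold, and data layout below are illustrative assumptions, not the patent's method:

```python
# Illustrative sketch of label statistics by geographical location: bucket
# tagged clips into a coarse lat/lon grid and report (place, label) pairs
# that occur often enough to suggest a "hot event". Purely hypothetical.
from collections import Counter

def hot_events(tagged_clips, threshold=3, grid=0.01):
    """tagged_clips: iterable of (lat, lon, label) tuples."""
    counts = Counter(
        (round(lat / grid) * grid, round(lon / grid) * grid, label)
        for lat, lon, label in tagged_clips
    )
    return {key: n for key, n in counts.items() if n >= threshold}

# four "collision" labels near one place suggest a hot event there
clips = [(31.231, 121.470, "collision")] * 4 + [(31.231, 121.470, "BMW")]
events = hot_events(clips)
```

A production system would likely use a spatial index or geohashing instead of grid rounding, but the aggregation idea is the same.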
After a predetermined instruction input by the user is detected, or after the video clip has been saved or shared, the trajectory-line editing interface can be hidden, i.e. the menu bar 15, the label button 16, and the like are hidden. The trajectory line segment 14 can still be displayed in a format different from that of the trajectory line 11, but the first mark object 12 and the second mark object 13 are no longer displayed.
The editing mode of the trajectory line segment 14 (i.e. the video clip selection mode) can be activated again. Referring to Fig. 6, for example, after the user clicks the trajectory line segment 14, the first mark object 12, the second mark object 13, and the menu bar 15 can be displayed again. The user can then adjust the positions of the start and end points of the trajectory segment 14, delete labels, add new labels, or modify labels.
In another embodiment, multiple trajectory line segments 14 can be set on the same trajectory line 11; these trajectory line segments 14 may be non-overlapping or may partly overlap. For example, as shown in Fig. 6, in this embodiment there are three mutually non-overlapping trajectory line segments 14, 17, and 18 on the trajectory line 11.
In another embodiment, the trajectory line segment 14 can also be used to trigger other functions. For example, when the trajectory line segment 14 is double-clicked, playback of the video clip corresponding to the trajectory line segment 14 can start in the video playback interface 2.
Based on the video clips and label data edited by users, the cloud server system 20 can provide related services. Referring to Fig. 7, a schematic diagram of the electronic map interface in a client 30, the trajectory line 11 and the trajectory line segments 14, 17, and 18 edited by other users can be displayed. The difference is that, in the client 30, the user cannot edit or modify the trajectory line segments 14, 17, and 18, and the labels edited by other users are only displayed in the label bar 16 and cannot be deleted or modified.
According to the technical solutions of the above embodiments, a user can select a video clip by selecting a segment of the trajectory line displayed in the electronic map, allowing the user to clearly know the geographical range covered by the video clip and improving the accuracy of marking video shot while in motion.
Referring to Fig. 8, an embodiment of the present invention also provides a video processing apparatus, comprising: a parsing module 31, a track display module 32, a trajectory line selecting module 33, and a video clip generation module 35. It will be appreciated that the video processing apparatus shown in Fig. 8 can be a specific embodiment of the video processing apparatus 136 shown in Fig. 3.
The parsing module 31 is configured to parse the corresponding geographical location information and shooting time from each frame of the video data.
The track display module 32 is configured to display a corresponding trajectory line in the electronic map interface according to the geographical location information and shooting time.
The trajectory line selecting module 33 is configured to determine, according to user input, the start point and the end point of the trajectory line segment selected by the user.
The video clip generation module 35 is configured to extract from the video data the video data whose shooting time falls between the start-point shooting time and the end-point shooting time, and to generate a video clip corresponding to the trajectory line segment for further processing.
Referring to Fig. 9, in another embodiment, the above video processing apparatus further includes: a mode switch module 36 and a trajectory line editor module 37.
The mode switch module 36 is configured to display the first mark object and the second mark object on the trajectory line when a capture triggering instruction input by the user is detected, the first mark object and the second mark object indicating the start point and the end point of the trajectory line segment.
The trajectory line editor module 37 is configured to set the position of the first mark object and/or the second mark object along the trajectory line according to user operation.
Referring to Fig. 10, in another embodiment, the above video processing apparatus further includes: a progress bar editor module 38, configured to correspondingly update the position of the third mark object when the position of the first mark object is set, and to correspondingly update the position of the fourth mark object when the position of the second mark object is set.
Further, the progress bar editor module 38 is also configured to set the position of the third mark object and/or the fourth mark object along the progress bar according to user operation; correspondingly, the trajectory line editor module 37 is also configured to correspondingly update the position of the first mark object when the position of the third mark object is set, and to correspondingly update the position of the second mark object when the position of the fourth mark object is set.
Referring to Fig. 11, in another embodiment, the mode switch module 36 is also configured to display a description information input interface when the capture triggering instruction input by the user is detected; the video processing apparatus further includes: a description information editor module 39, configured to receive description information input by the user through the description information input interface, and to save or transmit the description information together with the video clip.
Referring to Fig. 12, in another embodiment, the above video processing apparatus further includes: a preserving module 40 and/or a sharing module 41. The preserving module 40 is configured to, if a capture instruction input by the user is detected, save the video clip as a separate video file; the sharing module 41 is configured to, if a sharing instruction input by the user is detected, share the video clip to a selected social network system.
According to the technical solutions of the above embodiments, a user can select a video clip by selecting a segment of the trajectory line displayed in the electronic map, allowing the user to clearly know the geographical range covered by the video clip and improving the accuracy of marking video shot while in motion.
The above is only a preferred embodiment of the present invention and is not intended to limit the present invention in any form. Although the present invention has been disclosed above with preferred embodiments, these are not intended to limit the invention. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make modifications or equivalent variations; any simple modification, equivalent variation, or alteration made to the above embodiments according to the technical essence of the present invention, without departing from the content of the technical solution of the present invention, still falls within the scope of the technical solution of the present invention.
Claims (6)
1. A video processing method based on geographical location, characterized by comprising:
parsing corresponding geographical location information from each frame of the video data;
displaying a corresponding trajectory line in an electronic map interface according to the geographical location information;
determining, according to user input, the start point and the end point of a trajectory line segment selected by the user;
when a capture triggering instruction input by the user is detected, displaying a first mark object and a second mark object on the trajectory line, the first mark object and the second mark object indicating the start point and the end point of the trajectory line segment; and
setting the position of the first mark object and/or the second mark object along the trajectory line according to user operation;
displaying a progress bar in a video playback interface, the progress bar corresponding to the video data and being used for directly selecting a video clip of the video data;
also displaying a third mark object and a fourth mark object on the progress bar when the capture triggering instruction input by the user is detected, the third mark object corresponding to the same video frame as the first mark object, and the fourth mark object corresponding to the same video frame as the second mark object;
correspondingly updating the position of the third mark object when the position of the first mark object is set, and correspondingly updating the position of the fourth mark object when the position of the second mark object is set;
correspondingly updating the position of the first mark object when the position of the third mark object is set, and correspondingly updating the position of the second mark object when the position of the fourth mark object is set; and
extracting from the video data the video data corresponding to the trajectory line segment and generating a corresponding video clip.
2. The video processing method based on geographical location according to claim 1, characterized by further comprising: also displaying a description information input interface when the capture triggering instruction input by the user is detected, receiving description information input by the user through the description information input interface, and saving or transmitting the description information together with the video clip.
3. The video processing method based on geographical location according to claim 1, characterized by further comprising: if a capture instruction input by the user is detected, saving the video clip as a separate video file; or
if a sharing instruction input by the user is detected, sharing the video clip to a selected social network system.
4. A video processing apparatus based on geographical location, characterized by comprising:
a parsing module, configured to parse corresponding geographical location information and shooting time from each frame of the video data;
a track display module, configured to display a corresponding trajectory line in an electronic map interface according to the geographical location information and shooting time;
a trajectory line selecting module, configured to determine, according to user input, the start point and the end point of a trajectory line segment selected by the user;
a mode switch module, configured to display a first mark object and a second mark object on the trajectory line when a capture triggering instruction input by the user is detected, the first mark object and the second mark object indicating the start point and the end point of the trajectory line segment; and
a trajectory line editor module, configured to set the position of the first mark object and/or the second mark object along the trajectory line according to user operation;
the mode switch module being also configured to display a progress bar in a video playback interface, the progress bar corresponding to the video data and being used for directly selecting a video clip of the video data, and to also display a third mark object and a fourth mark object on the progress bar when the capture triggering instruction input by the user is detected, the third mark object corresponding to the same video frame as the first mark object, and the fourth mark object corresponding to the same video frame as the second mark object;
a progress bar editor module, configured to correspondingly update the position of the third mark object when the position of the first mark object is set, and to correspondingly update the position of the fourth mark object when the position of the second mark object is set; and to correspondingly update the position of the first mark object when the position of the third mark object is set, and to correspondingly update the position of the second mark object when the position of the fourth mark object is set;
a video clip generation module, configured to extract from the video data the video data corresponding to the trajectory line segment and to generate a corresponding video clip.
5. The video processing apparatus based on geographical location according to claim 4, characterized in that the mode switch module is also configured to display a description information input interface when the capture triggering instruction input by the user is detected;
the apparatus further comprises a description information editor module, configured to receive description information input by the user through the description information input interface, and to save or transmit the description information together with the video clip.
6. The video processing apparatus based on geographical location according to claim 4, characterized in that the apparatus further comprises: a preserving module and/or a sharing module;
the preserving module being configured to, if a capture instruction input by the user is detected, save the video clip as a separate video file;
the sharing module being configured to, if a sharing instruction input by the user is detected, share the video clip to a selected social network system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610162223.7A CN105827959B (en) | 2016-03-21 | 2016-03-21 | Method for processing video frequency based on geographical location |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610162223.7A CN105827959B (en) | 2016-03-21 | 2016-03-21 | Method for processing video frequency based on geographical location |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105827959A CN105827959A (en) | 2016-08-03 |
CN105827959B true CN105827959B (en) | 2019-06-25 |
Family
ID=56524193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610162223.7A Active CN105827959B (en) | 2016-03-21 | 2016-03-21 | Method for processing video frequency based on geographical location |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105827959B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109076263B (en) * | 2017-12-29 | 2021-06-22 | 深圳市大疆创新科技有限公司 | Video data processing method, device, system and storage medium |
CN108388649B (en) * | 2018-02-28 | 2021-06-22 | 深圳市科迈爱康科技有限公司 | Method, system, device and storage medium for processing audio and video |
CN108509132B (en) * | 2018-03-29 | 2020-06-16 | 杭州电魂网络科技股份有限公司 | Position progress bar display method and device and readable storage medium |
CN109540122B (en) * | 2018-11-14 | 2022-11-04 | 中国银联股份有限公司 | Method and device for constructing map model |
CN109743324B (en) * | 2019-01-11 | 2021-12-24 | 郑州嘉晨电器有限公司 | Vehicle positioning protection system |
CN112261483B (en) * | 2020-10-21 | 2023-06-23 | 南京维沃软件技术有限公司 | Video output method and device |
CN112367555B (en) * | 2020-11-11 | 2023-03-24 | 深圳市睿鑫通科技有限公司 | gps data encryption and gps video track playing system |
CN113992976B (en) * | 2021-10-19 | 2023-10-20 | 咪咕视讯科技有限公司 | Video playing method, device, equipment and computer storage medium |
CN115225971A (en) * | 2022-06-24 | 2022-10-21 | 网易(杭州)网络有限公司 | Video progress adjusting method and device, computer equipment and storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102162735A (en) * | 2010-02-23 | 2011-08-24 | 王昊 | Method, system and mobile communication terminal for obtaining travelling route through image data |
CN101924925A (en) * | 2010-07-30 | 2010-12-22 | 深圳市同洲电子股份有限公司 | Method, system and user interface for playback of monitoring videos and vehicle traveling track |
CN102521253B (en) * | 2011-11-17 | 2013-05-22 | 西安交通大学 | Visual multi-media management method of network users |
CN103165153B (en) * | 2011-12-14 | 2016-03-23 | 中国电信股份有限公司 | A kind of method and mobile video terminal by recording location trajectory broadcasting video |
CN103491450A (en) * | 2013-09-25 | 2014-01-01 | 深圳市金立通信设备有限公司 | Setting method of playback fragment of media stream and terminal |
CN104679873A (en) * | 2015-03-09 | 2015-06-03 | 深圳市道通智能航空技术有限公司 | Aircraft tracing method and aircraft tracing system |
-
2016
- 2016-03-21 CN CN201610162223.7A patent/CN105827959B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN105827959A (en) | 2016-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105827959B (en) | Method for processing video frequency based on geographical location | |
US10921803B2 (en) | Method and device for controlling flight of unmanned aerial vehicle and remote controller | |
CN105975570B (en) | Video searching method and system based on geographical location | |
US9721392B2 (en) | Server, client terminal, system, and program for presenting landscapes | |
US9582937B2 (en) | Method, apparatus and computer program product for displaying an indication of an object within a current field of view | |
US9080877B2 (en) | Customizing destination images while reaching towards a desired task | |
US9664527B2 (en) | Method and apparatus for providing route information in image media | |
US20150054981A1 (en) | Method, electronic device, and computer program product | |
WO2017156793A1 (en) | Geographic location-based video processing method | |
CN103916473B (en) | Travel information processing method and relevant apparatus | |
JP2014127148A5 (en) | ||
CN103080928A (en) | Method and apparatus for providing a localized virtual reality environment | |
US20230284000A1 (en) | Mobile information terminal, information presentation system and information presentation method | |
CN104850547B (en) | Picture display method and device | |
JP2011233005A (en) | Object displaying device, system, and method | |
CN103688572A (en) | Systems and methods for audio roaming for mobile devices, group information server among mobile devices, and defining group of users with mobile devices | |
KR20120126529A (en) | ANALYSIS METHOD AND SYSTEM OF CORRELATION BETWEEN USERS USING Exchangeable Image File Format | |
CN105933651B (en) | Method and apparatus based on target route jumper connection video | |
CN105917329A (en) | Information display device and information display program | |
CN108241678B (en) | Method and device for mining point of interest data | |
JP2006292611A (en) | Positioning system | |
KR20110136084A (en) | Apparatus and method for searching of content in a portable terminal | |
KR20150058607A (en) | Method for oupputing synthesized image, a terminal and a server thereof | |
CN114384567A (en) | Positioning method and related device | |
KR101729115B1 (en) | Mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20200716 Address after: Room 1718, 301 Qianxin Road, Jinshanwei Town, Jinshan District, Shanghai Patentee after: RUIGANG INTELLIGENT TECHNOLOGY (SHANGHAI) Co.,Ltd. Address before: 518000 A3 building, building three, light Dragon Industrial Zone, Pearl Dragon Road, Shenzhen, Guangdong, Nanshan District, four Patentee before: SHENZHEN ZHIYI TECHNOLOGY DEVELOPMENT Co.,Ltd. |
|
TR01 | Transfer of patent right |